* Generate resource_types for MeshGateway by specifying spec option
* Register MeshGateway type w/ TODOs for hooks
* Initialize controller for MeshGateway resources
* Add meshgateway to list of v2 resource dependencies for golden test
* Scope MeshGateway resource to partition
* xds: Ensure v2 route match is populated for gRPC
Similar to HTTP, ensure that route match config (which is required by
Envoy) is populated when default values are used.
Because the default matches generated for gRPC contain a single empty
`GRPCRouteMatch`, and that proto does not directly support prefix-based
config, an interpretation of the empty struct is needed to generate the
same output that the `HTTPRouteMatch` is explicitly configured to
provide in internal/mesh/internal/controllers/routes/generate.go.
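A minimal sketch of that interpretation, using simplified stand-in types rather than the real protos (the type and field names below are assumptions for illustration): an empty gRPC match collapses to the same catch-all `/` prefix that the HTTP path states explicitly.

```go
package main

import "fmt"

// Simplified stand-ins for the real route match protos.
type GRPCRouteMatch struct {
	Service string // empty means "match any service"
	Method  string // empty means "match any method"
}

type EnvoyRouteMatch struct {
	Prefix string
}

// toEnvoyMatch interprets a gRPC route match as an Envoy-style route match.
// An empty GRPCRouteMatch has no prefix concept of its own, so it is mapped
// to the universal "/" prefix that the HTTP route match produces explicitly.
func toEnvoyMatch(m GRPCRouteMatch) EnvoyRouteMatch {
	if m.Service == "" && m.Method == "" {
		return EnvoyRouteMatch{Prefix: "/"}
	}
	// A fully specified gRPC match becomes a "/<service>/<method>" prefix.
	return EnvoyRouteMatch{Prefix: "/" + m.Service + "/" + m.Method}
}

func main() {
	fmt.Println(toEnvoyMatch(GRPCRouteMatch{}))                                               // {/}
	fmt.Println(toEnvoyMatch(GRPCRouteMatch{Service: "greeter.Greeter", Method: "SayHello"})) // {/greeter.Greeter/SayHello}
}
```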
* xds: Ensure protocol set for gRPC resources
Add explicit protocol in `ProxyStateTemplate` builders and validate it
is always set on clusters. This ensures that HTTP filters and
`http2_protocol_options` are populated in all the necessary places for
gRPC traffic and prevents future unintended omissions of non-TCP
protocols.
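A hedged sketch of the validation described above, using simplified stand-in types rather than the real ProxyStateTemplate protos: every cluster must carry an explicit protocol, so an accidental omission of a non-TCP protocol fails loudly rather than silently dropping the HTTP filters and `http2_protocol_options` that gRPC needs.

```go
package main

import (
	"errors"
	"fmt"
)

// Protocol is a stand-in for the protocol enum carried by clusters.
type Protocol int

const (
	ProtocolUnspecified Protocol = iota
	ProtocolTCP
	ProtocolHTTP
	ProtocolHTTP2
	ProtocolGRPC
)

type Cluster struct {
	Name     string
	Protocol Protocol
}

// validateClusters rejects any cluster whose protocol was left unset, which
// is how an unintended omission of a non-TCP protocol would surface.
func validateClusters(clusters []Cluster) error {
	for _, c := range clusters {
		if c.Protocol == ProtocolUnspecified {
			return errors.New("cluster " + c.Name + " has no protocol set")
		}
	}
	return nil
}

func main() {
	clusters := []Cluster{
		{Name: "api", Protocol: ProtocolGRPC},
		{Name: "legacy"}, // protocol accidentally omitted
	}
	fmt.Println(validateClusters(clusters)) // cluster legacy has no protocol set
}
```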
Co-authored-by: John Murret <john.murret@hashicorp.com>
* xdsv2: support l7 by adding xfcc policy/headers, tweaking routes, and making a number of l7 listener tests pass
* sidecarproxycontroller: add l7 local app support
* trafficpermissions: make l4 traffic permissions work on l7 workloads
* rename route name field for consistency with l4 cluster name field
* resolve conflicts and rebase
* fix: ensure the route name is also used in the l7 destination route name. Previously it appeared only in the route names themselves; now the route name and the l7 destination route name line up.
This change builds on #19043 and #19067 and updates the sidecar controller to use those computed resources. This achieves several benefits:
* The cache is now simplified, which helps us solve previous bugs (such as multiple Upstreams/Destinations targeting the same service overwriting each other)
* We no longer need proxy config cache
* We no longer need to do merging of proxy configs as part of the controller logic
* Controller watches are simplified because we no longer need complex mapping through the cache and can instead use the simple ReplaceType mapper (sketched below).
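An illustrative sketch of the ReplaceType-style mapping mentioned above. The types are simplified stand-ins for `pbresource`, and the mapper shape is an assumption rather than Consul's exact API: the watched resource keeps its name and only swaps its type, which is all that is needed once computed resources are name-aligned with the resource they describe.

```go
package main

import "fmt"

// ID is a simplified stand-in for a pbresource ID.
type ID struct {
	Type string
	Name string
}

// replaceType returns a dependency mapper that maps a watched resource to a
// reconcile request for the name-aligned resource of a different type.
func replaceType(to string) func(ID) []ID {
	return func(watched ID) []ID {
		return []ID{{Type: to, Name: watched.Name}}
	}
}

func main() {
	mapToProxyStateTemplate := replaceType("mesh.ProxyStateTemplate")
	fmt.Println(mapToProxyStateTemplate(ID{Type: "mesh.ComputedDestinations", Name: "web-workload"}))
	// Output: [{mesh.ProxyStateTemplate web-workload}]
}
```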
It also makes several other improvements/refactors:
* Unifies all caches into one. Originally the caches were more independent; now that they need to interact with each other, it made sense to unify them so that the sidecar proxy controller uses one cache with 3 bimappers.
* Unifies cache and mappers. The mapper already needed all the caches anyway, so now that the cache is unified it made sense to have the cache do the mapping as well.
* Gets rid of service endpoints watches. These were needed to get updates when a service's identities changed so that we could update the proxy state template's spiffe IDs for those destinations. However, watching service endpoints generates a lot of reconcile requests for this controller because service endpoints objects change frequently (they contain workload health status). This is solved by adding a status to the service object that tracks "bound identities" and having the service endpoints controller update it (see the sketch after this list). Because the sidecar proxy controller already watches service objects, the updated status gives it the notifications it needs.
* Adds a watch for workloads. We need it so that we get updates when a workload's ports change. This also ensures that we update cached identities when a workload's identity changes.
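A rough sketch of the "bound identities" status idea from the list above; the names and shapes here are assumptions for illustration only. The service endpoints controller writes the distinct identities behind a service into a status on the Service resource, so the sidecar proxy controller, which already watches services, sees identity changes without watching the much noisier service endpoints objects.

```go
package main

import (
	"fmt"
	"sort"
)

// Service is a simplified stand-in carrying a status map keyed by status name.
type Service struct {
	Name   string
	Status map[string][]string
}

// updateBoundIdentities is roughly what the endpoints controller would do on
// each reconcile: compute the distinct identities behind the service and
// write them into the service's status, producing a Service watch event only
// when the set actually changes.
func updateBoundIdentities(svc *Service, identities []string) bool {
	sort.Strings(identities)
	old := svc.Status["bound-identities"]
	if fmt.Sprint(old) == fmt.Sprint(identities) {
		return false // unchanged: no extra reconciles for watchers
	}
	svc.Status["bound-identities"] = identities
	return true
}

func main() {
	svc := &Service{Name: "api", Status: map[string][]string{}}
	fmt.Println(updateBoundIdentities(svc, []string{"api-identity"})) // true
	fmt.Println(updateBoundIdentities(svc, []string{"api-identity"})) // false
}
```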
This commit adds a new type ComputedDestinations that will contain all destinations from any Destinations resources and will be name-aligned with a workload. This also adds an explicit-destinations controller that computes these resources.
This is needed to simplify the tracking we currently need to do in the sidecar-proxy controller and makes it easier to query all explicit destinations that apply to a workload.
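A minimal sketch of that flattening, assuming simplified stand-in types rather than the real protos: every Destinations resource that selects a workload contributes its destinations to a single ComputedDestinations resource named after that workload.

```go
package main

import "fmt"

// Simplified stand-ins for the real resources.
type Destination struct {
	ServiceName string
	ListenAddr  string
}

type Destinations struct {
	SelectedWorkloads []string
	Destinations      []Destination
}

type ComputedDestinations struct {
	WorkloadName string // name-aligned with the workload
	Destinations []Destination
}

// computeDestinations merges every Destinations resource selecting the given
// workload into one name-aligned ComputedDestinations.
func computeDestinations(workload string, all []Destinations) ComputedDestinations {
	out := ComputedDestinations{WorkloadName: workload}
	for _, d := range all {
		for _, w := range d.SelectedWorkloads {
			if w == workload {
				out.Destinations = append(out.Destinations, d.Destinations...)
			}
		}
	}
	return out
}

func main() {
	all := []Destinations{
		{SelectedWorkloads: []string{"web"}, Destinations: []Destination{{ServiceName: "api", ListenAddr: "127.0.0.1:1234"}}},
		{SelectedWorkloads: []string{"web"}, Destinations: []Destination{{ServiceName: "db", ListenAddr: "127.0.0.1:2345"}}},
	}
	fmt.Println(computeDestinations("web", all))
}
```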
* Introduce a new type `ComputedProxyConfiguration` and add a controller for it. This is needed for two reasons. The first is that external integrations like kubernetes may need to read the fully computed and sorted proxy configuration per workload. The second reason is that it makes the sidecar-proxy controller logic quite a bit simpler, as it no longer needs to do this computation and merging itself.
* Generalize the workload selection mapper and fix a bug where it would delete IDs from the tree when only one remained after a removal.
Convert more of the xRoutes features that were skipped in an earlier PR into ComputedRoutes and make them work:
- DestinationPolicy defaults
- more timeouts
- load balancer policy
- request/response header mutations
- urlrewrite
- GRPCRoute matches
xRoute resource types contain a slice of parentRefs to services that they
manipulate traffic for. All xRoutes that have a parentRef to a given Service
will be merged together to generate a ComputedRoutes resource
name-aligned with that Service.
This means that a write of an xRoute with 2 parent ref pointers will cause
at most 2 reconciles for ComputedRoutes.
If that xRoute's list of parentRefs were ever to be reduced, or otherwise
lose an item, that subsequent map event will only emit events for the current
set of refs. The removed ref will not cause the generated ComputedRoutes
related to that service to be re-reconciled to omit the influence of that xRoute.
To combat this, we will store on the ComputedRoutes resource a
`BoundResources []*pbresource.Reference` field with references to all
resources that were used to influence the generated output.
When the routes controller reconciles, it will use a bimapper to index this
influence, and the dependency mappers for the xRoutes will look
themselves up in that index to discover additional (former) ComputedRoutes
that need to be notified as well.
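A hedged sketch of that lookup, with a trivial stand-in for the bimapper rather than Consul's real one: the routes controller records which xRoutes influenced each ComputedRoutes, and the xRoute dependency mapper returns both the current parentRefs and any formerly influenced ComputedRoutes found in that index, so a dropped parentRef still triggers a reconcile that removes the route's influence.

```go
package main

import "fmt"

// biMapper is a trivial two-way index between ComputedRoutes names and the
// xRoute names bound into them.
type biMapper struct {
	byComputed map[string]map[string]struct{} // ComputedRoutes -> xRoutes that influenced it
	byRoute    map[string]map[string]struct{} // xRoute -> ComputedRoutes it influenced
}

func newBiMapper() *biMapper {
	return &biMapper{
		byComputed: map[string]map[string]struct{}{},
		byRoute:    map[string]map[string]struct{}{},
	}
}

// track is called when a ComputedRoutes is regenerated, recording its bound resources.
func (b *biMapper) track(computed string, boundRoutes []string) {
	for old := range b.byComputed[computed] {
		delete(b.byRoute[old], computed)
	}
	b.byComputed[computed] = map[string]struct{}{}
	for _, r := range boundRoutes {
		b.byComputed[computed][r] = struct{}{}
		if b.byRoute[r] == nil {
			b.byRoute[r] = map[string]struct{}{}
		}
		b.byRoute[r][computed] = struct{}{}
	}
}

// mapXRoute is the dependency mapper: current parentRefs plus any formerly
// influenced ComputedRoutes found in the index.
func (b *biMapper) mapXRoute(route string, currentParentRefs []string) []string {
	seen := map[string]struct{}{}
	var out []string
	add := func(name string) {
		if _, ok := seen[name]; !ok {
			seen[name] = struct{}{}
			out = append(out, name)
		}
	}
	for _, ref := range currentParentRefs {
		add(ref)
	}
	for former := range b.byRoute[route] {
		add(former)
	}
	return out
}

func main() {
	b := newBiMapper()
	b.track("computed-routes/api", []string{"http-route-1"})
	// http-route-1 later drops its parentRef to "api": the mapper still
	// notifies the ComputedRoutes it used to influence.
	fmt.Println(b.mapXRoute("http-route-1", []string{"computed-routes/web"}))
	// Output: [computed-routes/web computed-routes/api]
}
```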