Protobuf Refactoring for Multi-Module Cleanliness
This commit includes the following:
* Moves all packages that were within proto/ to proto/private
* Rewrites imports to account for the packages being moved
* Adds buf.work.yaml to enable buf workspaces
* Names the proto-public buf module so that we can override the Go package imports within proto/buf.yaml
* Bumps the buf dependency to 1.14.0 (I was trying the newer version to see whether it would get around an issue; it didn't, but it also doesn't break anything and it seemed best to keep up with the toolchain changes)
Why:
In the future we will need to consume other protobuf dependencies, such as the Google HTTP annotations for OpenAPI generation or grpc-gateway usage.
There were also some recent changes to add our own rate-limiting annotations.
The two were not working when I tried to use them together (while attempting to rebase another branch).
Buf workspaces should be the solution to the problem.
Buf workspaces mean that each module's generated Go code embeds proto file names relative to the proto/ directory rather than the top-level repo root.
This resulted in proto file name conflicts in the Go global protobuf type registry.
The solution was to add a private/ directory to the path within the proto/ directory.
That then required rewriting all the imports.
Is this safe?
AFAICT yes
The gRPC wire protocol doesn't seem to care about the proto file names (although the Go grpc code does tack on the proto file name as Metadata in the ServiceDesc)
Other than imports, there were no changes to any generated code as a result of this.
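For illustration only (not part of this change), here is a minimal sketch of why the embedded file path matters at runtime: generated Go code registers each proto file in the global registry keyed by that path, so two modules emitting the same relative path collide at init time. The path used below is hypothetical.

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/reflect/protoregistry"
)

func main() {
	// Generated .pb.go files register themselves in protoregistry.GlobalFiles
	// keyed by the file path embedded at generation time. With buf workspaces
	// that path is relative to the module root (the proto/ directory), so
	// moving packages under proto/private keeps paths unique across modules.
	// The path here is hypothetical.
	fd, err := protoregistry.GlobalFiles.FindFileByPath("private/pbexample/example.proto")
	if err != nil {
		fmt.Println("not registered:", err)
		return
	}
	fmt.Println("registered as:", fd.Path())
}
```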
* [API Gateway] Add integration test for conflicted TCP listeners
* [API Gateway] Update simple test to leverage intentions and multiple listeners
* Fix broken unit test
* PR suggestions
* Stub proxycfg handler for API gateway
* Add Service Kind constants/handling for API Gateway
* Begin stubbing for SDS
* Add new Secret type to xDS order of operations
* Continue stubbing of SDS
* Iterate on proxycfg handler for API gateway
* Handle BoundAPIGateway config entry subscription in proxycfg-glue
* Add API gateway to config snapshot validation
* Add API gateway to config snapshot clone, leaf, etc.
* Subscribe to bound route + cert config entries on bound-api-gateway
* Track routes + certs on API gateway config snapshot
* Generate DeepCopy() for types used in watch.Map
* Watch all active references on api-gateway, unwatch inactive
* Track loading of initial bound-api-gateway config entry
* Use proper proto package for SDS mapping
* Use ResourceReference instead of ServiceName, collect resources
* Fix typo, add + remove TODOs
* Watch discovery chains for TCPRoute
* Add TODO for updating gateway services for api-gateway
* make proto
* Regenerate deep-copy for proxycfg
* Set datacenter on upstream ID from query source
* Watch discovery chains for http-route service backends
* Add ServiceName getter to HTTP+TCP Service structs
* Clean up unwatched discovery chains on API Gateway
* Implement watch for ingress leaf certificate
* Collect upstreams on http-route + tcp-route updates
* Remove unused GatewayServices update handler
* Remove unnecessary gateway services logic for API Gateway
* Remove outdated TODO
* Use .ToIngress where appropriate, including TODO for cleaning up
* Cancel before returning error
* Remove GatewayServices subscription
* Add godoc for handlerAPIGateway functions
* Update terminology from Connect => Consul Service Mesh
Consistent with terminology changes in https://github.com/hashicorp/consul/pull/12690
* Add missing TODO
* Remove duplicate switch case
* Rerun deep-copy generator
* Use correct property on config snapshot
* Remove unnecessary leaf cert watch
* Clean up based on code review feedback
* Note handler properties that are initialized but set elsewhere
* Add TODO for moving helper func into structs pkg
* Update generated DeepCopy code
* gofmt
* Generate DeepCopy() for API gateway listener types
* Improve variable name
* Regenerate DeepCopy() code
* Fix linting issue
* Temporarily remove the secret type from resource generation
Fix local mesh gateway with peering discovery chains.
Prior to this patch, discovery chains with peers would not
properly honor the mesh gateway mode for two reasons.
1. An incorrect target upstream ID was used to look up the
mesh gateway mode. To fix this, the parent upstream UID is
now used instead of the discovery-chain target UID to find
the intended mesh gateway mode.
2. The watch for local mesh gateways was never initialized
for discovery chains. To fix this, the discovery chains are
now scanned, and a local gateway watch is spawned if the mesh
gateway mode is local and the target is a peering connection.
This is the OSS portion of enterprise PR 2489.
This PR introduces a server-local implementation of the
proxycfg.InternalServiceDump interface that sources data from a blocking query
against the server's state store.
For simplicity, it only implements the subset of the Internal.ServiceDump RPC
handler actually used by proxycfg - as such the result type has been changed
to IndexedCheckServiceNodes to avoid confusion.
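As a rough sketch only (the real Consul types and signatures differ), a server-local datasource of this shape runs a blocking query against the state store and pushes each new result to proxycfg:

```go
package proxycfgglue // hypothetical package name for this sketch

import "context"

// Illustrative stand-ins for the real structs and proxycfg types.
type IndexedCheckServiceNodes struct {
	Index uint64
	Nodes []string // simplified; the real type carries check/service/node data
}

type UpdateEvent struct {
	CorrelationID string
	Result        any // here, an *IndexedCheckServiceNodes
}

// InternalServiceDump sketches the datasource shape: Notify starts a
// blocking-query loop against the server's state store and delivers each
// new IndexedCheckServiceNodes result on updateCh, tagged with correlationID
// so proxycfg can route it into the right part of the snapshot.
type InternalServiceDump interface {
	Notify(ctx context.Context, correlationID string, updateCh chan<- UpdateEvent) error
}
```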
Mesh gateways can use hostnames in their tagged addresses (#7999). This is useful
if you were to expose a mesh gateway using a cloud networking load balancer appliance
that gives you a DNS name but no reliable static IPs.
Envoy cannot accept hostnames via EDS, so those must be configured using CDS (sketched below).
Logic for this already existed where gateways are configured elsewhere in the code, but
given the illusions in play for peering, the downstream of a peered service wasn't aware
that it should be doing the same.
Also:
- ensure that we always try to use WAN-like addresses to cross peer boundaries.
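A rough illustration of the EDS/CDS constraint above, written against go-control-plane types rather than Consul's actual xDS generation code, with a made-up hostname: a hostname endpoint has to be carried on a LOGICAL_DNS cluster so that Envoy performs the DNS resolution itself.

```go
package main

import (
	"time"

	clusterv3 "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3"
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	endpointv3 "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
	"google.golang.org/protobuf/types/known/durationpb"
)

// hostnameCluster builds a CDS cluster whose single endpoint is a DNS name.
// EDS cannot deliver hostnames, so the address rides along in the cluster
// itself and Envoy resolves it via LOGICAL_DNS.
func hostnameCluster(name, hostname string, port uint32) *clusterv3.Cluster {
	return &clusterv3.Cluster{
		Name:                 name,
		ConnectTimeout:       durationpb.New(5 * time.Second),
		ClusterDiscoveryType: &clusterv3.Cluster_Type{Type: clusterv3.Cluster_LOGICAL_DNS},
		LoadAssignment: &endpointv3.ClusterLoadAssignment{
			ClusterName: name,
			Endpoints: []*endpointv3.LocalityLbEndpoints{{
				LbEndpoints: []*endpointv3.LbEndpoint{{
					HostIdentifier: &endpointv3.LbEndpoint_Endpoint{
						Endpoint: &endpointv3.Endpoint{
							Address: &corev3.Address{
								Address: &corev3.Address_SocketAddress{
									SocketAddress: &corev3.SocketAddress{
										Address: hostname, // e.g. the LB appliance's DNS name
										PortSpecifier: &corev3.SocketAddress_PortValue{
											PortValue: port,
										},
									},
								},
							},
						},
					},
				}},
			}},
		},
	}
}

func main() {
	_ = hostnameCluster("mesh-gateway.dc2", "gateway.example.com", 8443)
}
```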
This is the OSS portion of enterprise PRs 1904, 1905, 1906, 1907, 1949,
and 1971.
It replaces the proxycfg manager's direct dependency on the agent cache
with interfaces that will be implemented differently when serving xDS
sessions from a Consul server.
OSS portion of enterprise PR 1857.
This removes (most) references to the `cache.UpdateEvent` type in the
`proxycfg` package.
As we're going to be replacing direct usage of the agent cache with interfaces that
can be satisfied by alternative server-local datasources, it doesn't make
sense to depend on this type everywhere anymore (particularly on the
`state.ch` channel).
We also plan to extract `proxycfg` out of Consul into a shared library in
the future, which would require removing this dependency.
Aside from a fairly rote find-and-replace, the main change is that the
`cache.Cache` and `health.Client` types now accept a callback function
parameter, rather than a `chan<- cache.UpdateEvent`. This allows us to
do the type conversion without running another goroutine.
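A simplified sketch of that point (names are illustrative, not Consul's actual signatures): accepting a callback lets the event be converted inline on the producer's goroutine, whereas a channel parameter would force the consumer to run an extra goroutine just to read, convert, and forward each event.

```go
package main

import "fmt"

// cacheUpdateEvent and proxycfgUpdateEvent stand in for the two event types
// on either side of the conversion.
type cacheUpdateEvent struct {
	CorrelationID string
	Result        any
}

type proxycfgUpdateEvent struct {
	CorrelationID string
	Result        any
}

// notify stands in for a datasource (agent cache, health client, ...) that
// reports results through a callback rather than a channel parameter.
func notify(correlationID string, cb func(cacheUpdateEvent)) {
	cb(cacheUpdateEvent{CorrelationID: correlationID, Result: "payload"})
}

func main() {
	ch := make(chan proxycfgUpdateEvent, 1)

	// The conversion happens inside the callback on the caller's goroutine,
	// so no dedicated forwarding goroutine is needed.
	notify("roots", func(e cacheUpdateEvent) {
		ch <- proxycfgUpdateEvent{CorrelationID: e.CorrelationID, Result: e.Result}
	})

	fmt.Println(<-ch)
}
```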
- `tls.incoming`: applies to the inbound mTLS targeting the public
listener on `connect-proxy` and `terminating-gateway` envoy instances
- `tls.outgoing`: applies to the outbound mTLS dialing upstreams from
`connect-proxy` and `ingress-gateway` envoy instances
Fixes #11966
Transparent proxies typically cannot dial upstreams in remote
datacenters. However, if their upstream configures a redirect to a
remote DC then the upstream targets will be in another datacenter.
In that sort of case we should use the WAN address for the passthrough.
Due to timing, a transparent proxy could have two upstreams to dial
directly with the same address.
For example:
- The orders service can dial upstreams shipping and payment directly.
- An instance of shipping at address 10.0.0.1 is deregistered.
- Payments is scaled up and scheduled to have address 10.0.0.1.
- The orders service receives the event for the new payments instance
before seeing the deregistration for the shipping instance. At this
point two upstreams have the same passthrough address and Envoy will
reject the listener configuration.
To disambiguate, this commit considers the Raft index when storing
passthrough addresses. In the example above, 10.0.0.1 would only be
associated with the newer payments service instance.
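A toy sketch of that disambiguation (types and field names made up for illustration): when two upstreams claim the same passthrough address, the association with the higher Raft index wins, so out-of-order events converge on the newest registration.

```go
package main

import "fmt"

type passthroughOwner struct {
	upstream  string
	raftIndex uint64
}

func main() {
	owners := map[string]passthroughOwner{}

	// observe records that an address belongs to an upstream, keeping the
	// association with the highest Raft index seen so far.
	observe := func(addr, upstream string, idx uint64) {
		if cur, ok := owners[addr]; ok && cur.raftIndex >= idx {
			return // a newer (or equally new) registration already owns this address
		}
		owners[addr] = passthroughOwner{upstream: upstream, raftIndex: idx}
	}

	// Events arrive out of order: the new payments instance is seen before
	// shipping's deregistration, but the Raft index decides ownership.
	observe("10.0.0.1", "payments", 120)
	observe("10.0.0.1", "shipping", 100)

	fmt.Println(owners["10.0.0.1"].upstream) // payments
}
```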
Transparent proxies can set up filter chains that allow direct
connections to upstream service instances. Services that can be dialed
directly are stored in the PassthroughUpstreams map of the proxycfg
snapshot.
Previously these addresses were not being cleaned up based on new
service health data. The list of addresses associated with an upstream
service would only ever grow.
As services scale up and down, eventually they will have instances
assigned to an IP that was previously assigned to a different service.
When IP addresses are duplicated across filter chain match rules the
listener config will be rejected by Envoy.
This commit updates the proxycfg snapshot management so that passthrough
addresses can get cleaned up when no longer associated with a given
upstream.
There is still the possibility of a race condition here where due to
timing an address is shared between multiple passthrough upstreams.
That concern is mitigated by #12195, but will be further addressed
in a follow-up.
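A simplified sketch of that cleanup (illustrative types only, not the real snapshot structures): the passthrough address set for an upstream is rebuilt from the latest health data on each update, so addresses that disappear from the service are dropped instead of accumulating.

```go
package main

import "fmt"

func main() {
	// upstream name -> set of directly dialable addresses
	passthrough := map[string]map[string]struct{}{}

	// update replaces (rather than appends to) the address set for an
	// upstream, which is what keeps stale addresses from lingering as
	// instances churn.
	update := func(upstream string, currentAddrs []string) {
		next := make(map[string]struct{}, len(currentAddrs))
		for _, a := range currentAddrs {
			next[a] = struct{}{}
		}
		passthrough[upstream] = next
	}

	update("shipping", []string{"10.0.0.1", "10.0.0.2"})
	update("shipping", []string{"10.0.0.2"}) // 10.0.0.1 was deregistered

	fmt.Println(passthrough["shipping"]) // map[10.0.0.2:{}]
}
```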
The gist here is that now we use a value-type struct proxycfg.UpstreamID
as the map key in ConfigSnapshot maps where we used to use "upstream
id-ish" strings. These are internal only and used just for bidirectional
trips through the agent cache keyspace (like the discovery chain target
struct).
For the few places where the upstream id needs to be projected into xDS,
that's what (proxycfg.UpstreamID).EnvoyID() is for. This lets us ALWAYS
inject the partition and namespace into these things without making
stuff like the golden testdata diverge.
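An illustrative sketch of the key-type idea (field names and the EnvoyID format here are invented, not the real proxycfg.UpstreamID): a comparable value struct can key ConfigSnapshot maps directly, and projecting to a string happens only at the xDS boundary.

```go
package main

import "fmt"

// UpstreamID is a comparable value type, so it can be used directly as a map
// key; the partition and namespace ride along in the key instead of being
// folded into an "upstream id-ish" string.
type UpstreamID struct {
	Name       string
	Namespace  string
	Partition  string
	Datacenter string
}

// EnvoyID projects the ID into the string form used for Envoy resource names.
// The format below is invented for this sketch.
func (u UpstreamID) EnvoyID() string {
	return fmt.Sprintf("%s.%s.%s.%s", u.Name, u.Namespace, u.Partition, u.Datacenter)
}

func main() {
	chains := map[UpstreamID]string{}

	uid := UpstreamID{Name: "payments", Namespace: "default", Partition: "default", Datacenter: "dc1"}
	chains[uid] = "discovery chain for payments"

	fmt.Println(uid.EnvoyID(), "->", chains[uid])
}
```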
Previously the datacenter of the gateway was the key identifier, now it
is the datacenter and partition.
When dialing services in other partitions or datacenters we now watch
the appropriate partition.
There is no interaction between these handlers, so splitting them into separate files
makes it easier to discover the full implementation of each kindHandler.