The new controller caches are initialized before the DependencyMappers or the
Reconciler run, but importantly they are not populated. The expectation is that
when the WatchList call is made to the resource service it will send an initial
snapshot of all resources matching a single type, and then perpetually send
UPSERT/DELETE events afterward. This initial snapshot flows through the caching
layer and catches it up to reflect the stored data.
Critically, the dependency mappers and reconcilers race against the restoration
of the caches on server startup or leader election. During this window, a
mapper or reconciler may use the cache to look up a specific relationship and
not find it. That same reconciler may then recompute some
persisted resource and in effect rewind it to a prior computed state.
Change
- Since we are updating the behavior of the WatchList RPC anyway, its response was
aligned with that of pbsubscribe and pbpeerstream by using a protobuf oneof instead of the enum+fields option.
- The WatchList RPC now has 3 response event types: Upsert, Delete, and
EndOfSnapshot. The initial batch of "snapshot" Upserts sent on a new
watch is followed by a single EndOfSnapshot event before the never-ending
sequence of Upsert/Delete events begins (see the sketch after this list).
- Within the Controller startup code we will launch N+1 goroutines to execute WatchList
queries for the watched types. The UPSERTs will be applied to the nascent cache
only (no mappers will execute).
- Upon witnessing the EndOfSnapshot event, those goroutines will terminate.
- When all cache-priming routines complete, the normal set of N+1 long-lived
watch routines launches to officially witness all events in the system using the
primed cache.
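To make the priming flow concrete, here is a rough sketch of one such goroutine. The oneof case names (`WatchEvent_Upsert_` etc.), the `GetEvent` accessor, and the `Cache` interface are assumptions modeled on typical protobuf codegen, not the literal Consul code:
```go
package controller

import (
	"context"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// Cache is a stand-in for the controller cache's write interface.
type Cache interface {
	Insert(*pbresource.Resource)
	Delete(*pbresource.Resource)
}

// primeCache runs one WatchList and applies the snapshot to the cache,
// returning once the EndOfSnapshot event is observed.
func primeCache(ctx context.Context, rsc pbresource.ResourceServiceClient, typ *pbresource.Type, c Cache) error {
	watch, err := rsc.WatchList(ctx, &pbresource.WatchListRequest{Type: typ})
	if err != nil {
		return err
	}
	for {
		event, err := watch.Recv()
		if err != nil {
			return err
		}
		switch e := event.GetEvent().(type) {
		case *pbresource.WatchEvent_Upsert_:
			// Snapshot upserts populate the nascent cache only; no
			// dependency mappers execute during priming.
			c.Insert(e.Upsert.GetResource())
		case *pbresource.WatchEvent_Delete_:
			c.Delete(e.Delete.GetResource())
		case *pbresource.WatchEvent_EndOfSnapshot_:
			// The cache now reflects stored data; exit so the long-lived
			// watch routines can start against a primed cache.
			return nil
		}
	}
}
```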
* Use a full EndpointRef on ComputedRoutes targets instead of just the ID
Today, the `ComputedRoutes` targets have the appropriate ID set for their `ServiceEndpoints` reference; however, the `MeshPort` and `RoutePort` are assumed to be those of the target when adding the endpoints reference to the sidecar's `ProxyStateTemplate`.
This is problematic when the target lives behind a `MeshGateway`, where the `MeshPort`/`RoutePort` used in the sidecar's `ProxyStateTemplate` should be that of the `MeshGateway` instead of the target.
Instead of assuming the `MeshPort` and `RoutePort` when building the `ProxyStateTemplate` for the sidecar, let's just add the full `EndpointRef` -- including the ID and the ports -- when hydrating the computed destinations.
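For illustration only, the hydrated reference might carry something like the following shape; these field names are assumptions, and the real message lives in the mesh protobufs:
```go
// EndpointRef is a hypothetical shape of a full endpoint reference;
// the actual proto field names may differ.
type EndpointRef struct {
	// ID of the ServiceEndpoints resource to route to.
	ID *pbresource.ID
	// Port names to use when building the sidecar's ProxyStateTemplate.
	// When the target lives behind a MeshGateway, these are the gateway's
	// ports, not the target's.
	MeshPort  string
	RoutePort string
}
```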
* Make sure the UID from the existing ServiceEndpoints makes it onto ComputedRoutes
* Update test assertions
* Undo confusing whitespace change
* Remove one-line function wrapper
* Use plural name for endpoints ref
* Add constants for gateway name, kind and port names
[OG Author: michael.zalimeni@hashicorp.com, rebase needed a separate PR]
* v2: support virtual port in Service port references
In addition to Service target port references, allow users to specify a
port by stringified virtual port value. This is useful in environments
such as Kubernetes where typical configuration is written in terms of
Service virtual ports rather than workload (pod) target port names.
Retaining the option of referencing target ports by name supports VMs,
Nomad, and other use cases where virtual ports are not used by default.
To support both use cases at once, we will strictly interpret port
references based on whether the value is numeric. See updated
`ServicePort` docs for more details.
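The numeric-vs-name rule lends itself to a small helper; a minimal sketch, assuming a purely syntactic split (this helper is illustrative, not the actual Consul implementation):
```go
package ports

import "strconv"

// interpretPortRef applies the rule described above: a numeric value is a
// stringified virtual port; anything else is a target port name.
func interpretPortRef(ref string) (virtualPort uint32, targetPortName string, isVirtual bool) {
	if n, err := strconv.ParseUint(ref, 10, 32); err == nil {
		return uint32(n), "", true
	}
	return 0, ref, false
}
```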
* v2: update service ref docs for virtual port support
Update proto and generated .go files with docs reflecting virtual port
reference support.
* v2: add virtual port references to L7 topo test
Add coverage for mixed virtual and target port references to existing
test.
* update failover policy controller tests to work with computed failover policy and assert error conditions against FailoverPolicy and ComputedFailoverPolicy resources
* accumulate services; don't overwrite them in enterprise
---------
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
Co-authored-by: R.B. Boyer <rb@hashicorp.com>
* If a workload does not implement a port, it should not be included in the list of endpoints for the Envoy cluster for that port (see the sketch after these bullets).
* Adds tenancy tests for xds controller and xdsv2 resource generation, and adds all those files.
* The original change in this PR was filtering the list of endpoints by the port being routed to (bullet 1). While making changes to the sidecarproxy controller golden files, I realized some golden files were unused because of the tenancy changes. Deleting those broke the xds controller tests, which weren't correctly using tenancy; fixing that in turn broke the xdsv2 tests, so I added tenancy support there too. Now, from sidecarproxy controller -> xds controller -> xdsv2, we have tenancy support and all the golden files are lined up.
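A minimal sketch of the port-filtering rule from the first bullet, using a hypothetical stand-in for the catalog endpoint message:
```go
// Endpoint is a hypothetical stand-in for the catalog endpoint message,
// carrying the set of ports the workload actually implements.
type Endpoint struct {
	Ports map[string]struct{}
}

// filterByPort keeps only endpoints whose workload implements portName, so
// the Envoy cluster for that port never routes to a workload lacking it.
func filterByPort(endpoints []*Endpoint, portName string) []*Endpoint {
	var out []*Endpoint
	for _, ep := range endpoints {
		if _, ok := ep.Ports[portName]; ok {
			out = append(out, ep)
		}
	}
	return out
}
```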
* API Gateway proto
* fix lint issue
* new line
* run make proto format
* regened with comment
* lint
* utilize existing TLS struct
* Update proto-public/pbmesh/v2beta1/api_gateway.proto
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* generated file
* Update proto-public/pbmesh/v2beta1/api_gateway.proto
* regen with comment
* format the comment
---------
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* Exported services api implemented
* Tests added, refactored code
* Adding server tests
* changelog added
* Proto gen added
* Adding codegen changes
* changing url, response object
* Fixing lint error by having namespace and partition directly
* Tests changes
* refactoring tests
* Simplified uniqueness logic for exported services; sorted the response in order of service name (see the sketch after these bullets)
* Fix lint errors, refactored code
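A minimal sketch of the uniqueness-plus-ordering behavior described above, operating on plain service names for illustration:
```go
package exportedservices

import "sort"

// uniqueSortedNames dedupes exported service names and returns them sorted
// by name, mirroring the simplified uniqueness and ordering logic.
func uniqueSortedNames(names []string) []string {
	seen := make(map[string]struct{}, len(names))
	out := make([]string, 0, len(names))
	for _, n := range names {
		if _, dup := seen[n]; dup {
			continue
		}
		seen[n] = struct{}{}
		out = append(out, n)
	}
	sort.Strings(out)
	return out
}
```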
* Implement In-Process gRPC for use by controller caching/indexing
This replaces the pipe-based listener implementation we were previously using. The new style can avoid cloning resources, which our controller caching/indexing takes advantage of to avoid duplicating resource objects in memory.
To maintain safety for controllers, and so they can modify data they get back from the cache and the resource service, the client presented in their runtime is wrapped with an autogenerated client that clones request and response messages as they pass through.
Another sizable change in this PR is consolidating how server-specific gRPC services get registered and managed. Before, this was spread across a bunch of different methods and it was difficult to track down how gRPC services were registered. Now it's all in one place.
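A hand-written approximation of that cloning wrapper for a single method; the real client is autogenerated per-method, and only `Read` is shown here:
```go
package cloning

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// cloningResourceClient clones messages on both sides of each call so that
// controllers and the in-process cache never share mutable protobuf state.
type cloningResourceClient struct {
	pbresource.ResourceServiceClient
}

func (c cloningResourceClient) Read(ctx context.Context, req *pbresource.ReadRequest, opts ...grpc.CallOption) (*pbresource.ReadResponse, error) {
	// Clone the request so the server never observes later mutations by
	// the caller.
	resp, err := c.ResourceServiceClient.Read(ctx, proto.Clone(req).(*pbresource.ReadRequest), opts...)
	if err != nil {
		return nil, err
	}
	// Clone the response so the controller may mutate it freely without
	// corrupting the in-process server's uncloned copy.
	return proto.Clone(resp).(*pbresource.ReadResponse), nil
}
```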
* Fix race in tests
* Ensure the resource service is registered to the multiplexed handler for forwarding from client agents
* Expose peer streaming on the internal handler
* Add HCCLink resource type
* Register HCCLink resource type with basic validation
* Add validation for required fields
* Add test for default ACLs
* Add no-op controller for HCCLink
* Add resource-apis semantic validation check in hcclink controller
* Add copyright headers
* Rename HCCLink to Link
* Add hcp_cluster_url to link proto
* Update 'disabled' reason with more detail
* Update link status name to consul.io/hcp/link
* Change link version from v1 to v2
* Use feature flag/experiment to enable v2 resources with HCP
* Adjust type + field names for ComputedExportedServices
The existing type and field names in `ComputedExportedServices` are confusing to work with.
For example, the mechanics of looping through services and their consumers wind up being:
```go
// The field name here doesn't reflect what is actually at each index of the list
for _, service := range exportedServices.Consumers {
	for _, consumer := range service.Consumers {
		// The prefix matching the type here causes stutter when reading and
		// isn't consistent with naming conventions for tenancy in pbresource
		tenancy := consumer.ConsumerTenancy
	}
}
```
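One possible post-rename shape, shown only to contrast with the snippet above; these names are a guess for illustration, and the commit itself defines the final fields:
```go
// Hypothetical post-rename field names.
for _, svc := range computedExportedServices.Services {
	for _, consumer := range svc.Consumers {
		tenancy := consumer.Tenancy // no type-prefix stutter
		_ = tenancy
	}
}
```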
* NET-6945 - Replace usage of deprecated Envoy field envoy.config.core.v3.HeaderValueOption.append
* update proto for v2 and then update xds v2 logic
* add changelog
* Update 20078.txt to be consistent with existing changelog entries
* swap enum values to match Envoy.
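For reference, the replacement in go-control-plane looks roughly like this: the deprecated wrapped-bool `append` maps onto the `append_action` enum, with `append: true` corresponding to `APPEND_IF_EXISTS_OR_ADD` and `false` to `OVERWRITE_IF_EXISTS_OR_ADD`. Header key/value here are placeholders:
```go
package headers

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
)

// appendHeaderOption builds a HeaderValueOption using the append_action
// enum instead of the deprecated `append` wrapped bool.
func appendHeaderOption() *corev3.HeaderValueOption {
	return &corev3.HeaderValueOption{
		Header:       &corev3.HeaderValue{Key: "x-custom", Value: "v"},
		AppendAction: corev3.HeaderValueOption_APPEND_IF_EXISTS_OR_ADD,
	}
}
```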
* Upgrade hcp-sdk-go to latest version v0.73
Changes:
- go get github.com/hashicorp/hcp-sdk-go
- go mod tidy
* From upgrade: regenerate protobufs for upgrade from 1.30 to 1.31
Ran: `make proto`
Slack: https://hashicorp.slack.com/archives/C0253EQ5B40/p1701105418579429
* From upgrade: fix mock interface implementation
After upgrading, there is the following compile error:
```
cannot use &mockHCPCfg{} (value of type *mockHCPCfg) as "github.com/hashicorp/hcp-sdk-go/config".HCPConfig value in return statement: *mockHCPCfg does not implement "github.com/hashicorp/hcp-sdk-go/config".HCPConfig (missing method Logout)
```
Solution: update the mock to have the missing Logout method
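The fix amounts to a one-method addition on the test mock; a sketch, with the `error` return assumed from the hcp-sdk-go interface definition:
```go
// Logout satisfies the expanded config.HCPConfig interface with a no-op.
// The error return type is assumed from the interface.
func (m *mockHCPCfg) Logout() error { return nil }
```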
* From upgrade: Lint: remove usage of deprecated req.ServerState.TLS
Due to upgrade, linting is erroring due to usage of a newly deprecated field
```
22:47:56 [consul]: make lint
--> Running golangci-lint (.)
agent/hcp/testing.go:157:24: SA1019: req.ServerState.TLS is deprecated: use server_tls.internal_rpc instead. (staticcheck)
        time.Until(time.Time(req.ServerState.TLS.CertExpiry)).Hours()/24,
                             ^
```
* From upgrade: adjust oidc error message
From the upgrade, this test started failing:
```
=== FAIL: internal/go-sso/oidcauth TestOIDC_ClaimsFromAuthCode/failed_code_exchange (re-run 2) (0.01s)
    oidc_test.go:393: unexpected error: Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"
```
Prior to the upgrade, the error returned was:
```
Provider login failed: Error exchanging oidc code: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"unexpected auth code\"}\n
```
Now the error returned is as below and does not contain "cannot fetch token"
```
Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"
```
* Update AgentPushServerState structs with new fields
HCP-side changes for the new fields are in:
https://github.com/hashicorp/cloud-global-network-manager-service/pull/1195/files
* Minor refactor for hcpServerStatus to abstract tlsInfo into struct
This will make it easier to set the same tls-info information to both
- status.TLS (deprecated field)
- status.ServerTLSMetadata (new field to use instead)
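A minimal sketch of that fan-out, with hypothetical struct shapes and guessed field names standing in for the real `hcpServerStatus` types:
```go
package hcp

import "time"

// tlsInfo collects the TLS details once so they can be assigned to both
// destinations. Field names here are guesses for illustration.
type tlsInfo struct {
	CertExpiry             time.Time
	CertName               string
	CertSerial             string
	CertIssuer             string
	CertificateAuthorities []string
}

type serverStatus struct {
	TLS               tlsInfo // deprecated field
	ServerTLSMetadata struct{ InternalRPC tlsInfo }
}

// applyTLSInfo populates both the deprecated and replacement fields from
// the same tlsInfo value.
func applyTLSInfo(s *serverStatus, info tlsInfo) {
	s.TLS = info                           // deprecated field, still populated
	s.ServerTLSMetadata.InternalRPC = info // new field to use instead
}
```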
* Update hcpServerStatus to parse out information for new fields
Changes:
- Improve error message and handling (encountered some issues and was confused)
- Set new field TLSInfo.CertIssuer
- Collect certificate authority metadata and set on TLSInfo.CertificateAuthorities
- Set TLSInfo on both server.TLS and server.ServerTLSMetadata.InternalRPC
* Update serverStatusToHCP to convert new fields to GNM rpc
* Add changelog
* Feedback: connect.ParseCert, caCerts
* Feedback: refactor and unit test server status
* Feedback: test to use expected struct
* Feedback: certificate with intermediate
* Feedback: catch no leaf, remove expectedErr
* Feedback: update todos with jira ticket
* Feedback: mock tlsConfigurator
* add build tags/import k8s specific proto packages
* fix generated import paths
* fix gomod linting issue
* mod tidy every go mod file
* revert protobuf version; will take care of it in a different PR
* cleaned up newlines
* added newline to end of file
* Generate resource_types for MeshGateway by specifying spec option
* Register MeshGateway type w/ TODOs for hooks
* Initialize controller for MeshGateway resources
* Add meshgateway to list of v2 resource dependencies for golden test
* Scope MeshGateway resource to partition