* Option to set HCP client at runtime
Allows us to initially set a nil HCP client for the
telemetry provider and update it later.
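A minimal sketch of the shape this takes, with hypothetical type and method names: the provider tolerates a nil client at construction and exposes a guarded setter for later.
```go
package telemetry

import "sync"

// HCPClient stands in for the real HCP client interface (hypothetical).
type HCPClient interface {
	FetchTelemetryConfig() error
}

// Provider can be constructed with a nil client and updated later,
// once the HCP configuration becomes available.
type Provider struct {
	mu     sync.RWMutex
	client HCPClient
}

// UpdateHCPClient installs (or replaces) the client at runtime.
func (p *Provider) UpdateHCPClient(c HCPClient) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.client = c
}

// getClient returns the current client, which may still be nil.
func (p *Provider) getClient() HCPClient {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.client
}
```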
* Set telemetry provider HCP client in HCP manager
Set the telemetry provider as a dependency and pass it to
the manager. Update the telemetry provider's HCP client
when the HCP manager starts.
* Add a provider interface for the metrics client
This provider will allow us to configure and reconfigure the
retryable HTTP client and the headers for the metrics client.
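A sketch of what such a provider interface could look like (names are assumptions, not Consul's actual API): the metrics client asks the provider for the current HTTP client and headers on each request, so both can be reconfigured behind it.
```go
package telemetry

import "net/http"

// MetricsClientProvider supplies the reconfigurable pieces of the
// metrics client (hypothetical interface, for illustration only).
type MetricsClientProvider interface {
	// GetHTTPClient returns the retryable HTTP client to use for the
	// next request; it may have been reconfigured since the last call.
	GetHTTPClient() *http.Client

	// GetHeader returns the headers to attach to the next request.
	GetHeader() http.Header
}
```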
* Move HTTP retryable client to separate file
Copied directly from the metrics client.
* Abstract HCP-specific values in HTTP client
Remove HCP-specific references and instead initialize with
a generic TLS configuration and authentication source.
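A sketch of what the abstracted constructor might look like, assuming hashicorp/go-retryablehttp and an oauth2 token source (the function and parameter names here are illustrative, not the exact code):
```go
package telemetry

import (
	"crypto/tls"
	"net/http"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
	"golang.org/x/oauth2"
)

// newRetryableClient builds a retryable HTTP client from a generic TLS
// configuration and token source, with no HCP-specific references.
func newRetryableClient(tlsCfg *tls.Config, src oauth2.TokenSource) *http.Client {
	retryClient := retryablehttp.NewClient()
	retryClient.RetryMax = 4

	// The base transport carries the caller-supplied TLS configuration.
	retryClient.HTTPClient.Transport = &http.Transport{
		TLSClientConfig: tlsCfg,
	}

	// Wrap the retryable client in an oauth2 transport so every request
	// is authenticated with whatever token source the caller provides.
	return &http.Client{
		Transport: &oauth2.Transport{
			Base:   &retryablehttp.RoundTripper{Client: retryClient},
			Source: src,
		},
	}
}
```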
* Set up HTTP client and headers in the provider
Move setup from the metrics client to the HCP telemetry
provider.
* Update the telemetry provider in the HCP manager
Initialize the provider without the HCP configs and then update
it in the HCP manager to enable it.
* Improve test assertion, fix method comment
* Move client provider to metrics client
* Stop the manager on setup error
* Add separate lock for http configuration
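Roughly, the two-lock layout looks like this (field names are assumptions): one lock guards the provider's general state, a second guards only the HTTP client and headers, so metrics requests don't contend with unrelated updates.
```go
package telemetry

import (
	"net/http"
	"sync"
)

// config is a stand-in for the provider's telemetry configuration.
type config struct{ endpoint string }

type telemetryProvider struct {
	mu  sync.Mutex // guards cfg and the rest of the provider state
	cfg *config

	httpMu     sync.RWMutex // guards only the HTTP client and headers
	httpClient *http.Client
	header     http.Header
}

// GetHTTPClient is called on the hot path for every metrics export, so
// it takes only the narrow httpMu read lock.
func (p *telemetryProvider) GetHTTPClient() *http.Client {
	p.httpMu.RLock()
	defer p.httpMu.RUnlock()
	return p.httpClient
}
```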
* Start telemetry provider in HCP manager
* Update HCP client and config as part of Run
* Remove option to set config at initialization
* Simplify and clean up setting HCP configs
* Add test for telemetry provider Run method
* Fix race condition
* Use clone of HTTP headers
* Only allow initial update and run once
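The last three fixes amount to a little defensive plumbing; a minimal sketch under assumed names, where headers are handed out as clones and sync.Once guards the initial update-and-run:
```go
package telemetry

import (
	"net/http"
	"sync"
)

type provider struct {
	httpMu sync.RWMutex
	header http.Header

	once sync.Once // the initial update-and-run happens exactly once
}

// GetHeader hands callers a clone so they can't mutate the shared map
// out from under concurrent requests.
func (p *provider) GetHeader() http.Header {
	p.httpMu.RLock()
	defer p.httpMu.RUnlock()
	return p.header.Clone()
}

// Run performs the initial update and starts the refresh loop; any
// later calls are no-ops (hypothetical method shape).
func (p *provider) Run(update func()) {
	p.once.Do(func() {
		update()
		go p.refreshLoop()
	})
}

func (p *provider) refreshLoop() { /* periodic refresh elided */ }
```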
* Increase timeouts for flaky peering test.
* Various test fixes.
* Fix race condition in reconcilePeering.
This resolves an issue where a peering object in the state store was
incorrectly mutated in place by a function, causing the test to fail
when run with the -race flag.
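The general shape of the fix is to deep-copy the shared object before mutating it; a sketch using protobuf cloning (the real reconcilePeering signature differs):
```go
package peering

import "google.golang.org/protobuf/proto"

// cloneBeforeMutate illustrates the fix: reconciliation works on a
// deep copy, so the object held by the state store is never written
// to while other goroutines (or the -race detector) are reading it.
func cloneBeforeMutate(stored proto.Message) proto.Message {
	copied := proto.Clone(stored)
	// ... apply reconciliation changes to copied, never to stored ...
	return copied
}
```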
* To fix an issue displaying the current reloaded config in the
v1/agent/self endpoint, #18681 caused the agent's internal
config struct member to be deep-copied and replaced on reload.
This is not safe because the field is not protected by a lock, nor
should it be, given how it is accessed by the rest of the system.
This PR does the same deep copy, but into a new field used solely to
capture the current reloaded values for display purposes. If there
has been no reload, the original config is used.
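A sketch of that layout, with hypothetical field and method names:
```go
package agent

import "sync"

// Config is a stand-in for the agent's runtime config type.
type Config struct{ NodeName string }

// DeepCopy is a hypothetical deep-copy helper.
func (c *Config) DeepCopy() *Config { out := *c; return &out }

type Agent struct {
	config *Config // original; deliberately not lock-protected

	displayMu     sync.Mutex
	displayConfig *Config // deep copy captured on reload, display only
}

// captureReloadedConfig stores a deep copy for /v1/agent/self to read.
func (a *Agent) captureReloadedConfig(newCfg *Config) {
	a.displayMu.Lock()
	a.displayConfig = newCfg.DeepCopy()
	a.displayMu.Unlock()
}

// selfConfig returns the reloaded copy if one exists, else the original.
func (a *Agent) selfConfig() *Config {
	a.displayMu.Lock()
	defer a.displayMu.Unlock()
	if a.displayConfig != nil {
		return a.displayConfig
	}
	return a.config
}
```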
* Implement In-Process gRPC for use by controller caching/indexing
This replaces the pipe-based listener implementation we were previously using. The new style can avoid cloning resources, which our controller caching/indexing takes advantage of to avoid duplicating resource objects in memory.
To keep this safe for controllers, and to let them modify the data they get back from the cache and the resource service, the client presented in their runtime is wrapped with an autogenerated client that clones request and response messages as they pass through it; a sketch of the idea follows below.
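A minimal illustration of that cloning wrapper, using proto.Clone and a hypothetical single-method interface rather than the generated client:
```go
package inprocgrpc

import (
	"context"

	"google.golang.org/protobuf/proto"
)

// ResourceReader stands in for the slice of the resource service a
// controller uses (hypothetical interface).
type ResourceReader interface {
	Read(ctx context.Context, req proto.Message) (proto.Message, error)
}

// cloningClient wraps an in-process client so controllers can freely
// mutate requests and responses without corrupting the server's (and
// the cache's) shared copies.
type cloningClient struct{ inner ResourceReader }

func (c *cloningClient) Read(ctx context.Context, req proto.Message) (proto.Message, error) {
	resp, err := c.inner.Read(ctx, proto.Clone(req))
	if err != nil {
		return nil, err
	}
	return proto.Clone(resp), nil
}
```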
Another sizable change in this PR is to consolidate how server-specific gRPC services get registered and managed. Previously this was spread across a number of different methods, and it was difficult to track down how gRPC services were registered. Now it's all in one place.
* Fix race in tests
* Ensure the resource service is registered to the multiplexed handler for forwarding from client agents
* Expose peer streaming on the internal handler
* Upgrade Go to 1.21
* ci: detect Go backwards compatibility test version automatically
For our submodules and other places we choose to test against previous
Go versions, detect this version automatically from the current one
rather than hard-coding it.
* Add HCCLink resource type
* Register HCCLink resource type with basic validation
* Add validation for required fields
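For illustration, required-field validation along these lines (the struct here is a stand-in for the generated HCCLink message; field names are assumptions):
```go
package hcclink

import (
	"errors"
	"fmt"
)

// hccLink stands in for the generated HCCLink resource message.
type hccLink struct {
	ClientID     string
	ClientSecret string
	ResourceID   string
}

// validate enforces the required fields, mirroring the basic
// validation the resource registration hooks into.
func validate(link *hccLink) error {
	var merr error
	if link.ClientID == "" {
		merr = errors.Join(merr, fmt.Errorf("client_id is required"))
	}
	if link.ClientSecret == "" {
		merr = errors.Join(merr, fmt.Errorf("client_secret is required"))
	}
	if link.ResourceID == "" {
		merr = errors.Join(merr, fmt.Errorf("resource_id is required"))
	}
	return merr
}
```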
* Add test for default ACLs
* Add no-op controller for HCCLink
* Add resource-apis semantic validation check in hcclink controller
* Add copyright headers
* Rename HCCLink to Link
* Add hcp_cluster_url to link proto
* Update 'disabled' reason with more detail
* Update link status name to consul.io/hcp/link
* Change link version from v1 to v2
* Use feature flag/experiment to enable v2 resources with HCP
* ci: Set Go version consistently via .go-version
Ensure Go version is determined consistently for CI and Docker builds
rather than spread across several different files.
The intent is to eventually replace this with use of the `toolchain`
directive in Go 1.21.
* Adjust type + field names for ComputedExportedServices
The existing type and field names in `ComputedExportedServices` are confusing to work with.
For example, the mechanics of looping through services and their consumers wind up being:
```go
// The field name here doesn't reflect what is actually at each index of the list
for _, service := range exportedServices.Consumers {
	for _, consumer := range service.Consumers {
		// The prefix matching the type here causes stutter when reading and
		// isn't consistent with naming conventions for tenancy in pbresource
		tenancy := consumer.ConsumerTenancy
	}
}
```
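With the adjusted names, the same loop reads the way the types suggest; roughly (the renamed fields shown here are approximations of the change's intent, not the exact new API):
```go
for _, service := range exportedServices.Services {
	for _, consumer := range service.Consumers {
		tenancy := consumer.Tenancy
	}
}
```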
* Update SCADA provider version
Also update mocks for SCADA provider.
* Create SCADA provider w/o HCP config, then update
Adds a placeholder config option so we can initialize a SCADA provider
without the HCP configuration, and an update method to supply the HCP
configuration later. We need this so that a SCADA listener can
eventually always be registered at startup, before the HCP config
values are known. A sketch of the pattern follows below.
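A sketch of the create-then-update pattern with hypothetical names (the real SCADA provider API differs):
```go
package scada

// hcpConfig stands in for the provider's HCP-derived settings
// (hypothetical type, for illustration).
type hcpConfig struct {
	resourceID   string
	clientID     string
	clientSecret string
}

// Provider can be constructed before any HCP credentials exist.
type Provider struct {
	cfg *hcpConfig
}

// New returns a provider with placeholder config so a SCADA listener
// can be registered at startup.
func New() *Provider {
	return &Provider{cfg: &hcpConfig{}}
}

// UpdateHCPConfig fills in the real values once the cloud config is
// known; the HCP manager calls this before starting the provider.
func (p *Provider) UpdateHCPConfig(cfg *hcpConfig) {
	p.cfg = cfg
}
```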
* Pass cloud configuration to HCP manager
Save the entire cloud configuration and pass it to the HCP
manager.
* Update and start SCADA provider in HCP manager
Move config updating and starting to the HCP manager. The HCP manager
will eventually be responsible for all processes that contribute
to linking to HCP.
* NET-6945 - Replace usage of deprecated Envoy field envoy.config.core.v3.HeaderValueOption.append
* update proto for v2 and then update xds v2 logic
* add changelog
* Update 20078.txt to be consistent with existing changelog entries
* Swap enum values to match Envoy.
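For reference, the replacement with go-control-plane types looks roughly like this (the helper name is illustrative): the deprecated bool `append` becomes an `append_action` enum value, matching Envoy's enum.
```go
package xds

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
)

// makeHeaderValueOption builds a header mutation using the append_action
// enum instead of the deprecated bool `append` field.
func makeHeaderValueOption(key, value string, appendValue bool) *corev3.HeaderValueOption {
	action := corev3.HeaderValueOption_OVERWRITE_IF_EXISTS_OR_ADD
	if appendValue {
		action = corev3.HeaderValueOption_APPEND_IF_EXISTS_OR_ADD
	}
	return &corev3.HeaderValueOption{
		Header:       &corev3.HeaderValue{Key: key, Value: value},
		AppendAction: action,
	}
}
```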
* NET-6946 - Replace usage of deprecated Envoy field envoy.config.route.v3.HeaderMatcher.safe_regex_match
* removing unrelated changes
* update golden files
* do not set engine type
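The analogous change for NET-6946, sketched with go-control-plane types (helper name illustrative): regex header matching moves to the `string_match` field, and the deprecated regex engine type is left unset.
```go
package xds

import (
	routev3 "github.com/envoyproxy/go-control-plane/envoy/config/route/v3"
	matcherv3 "github.com/envoyproxy/go-control-plane/envoy/type/matcher/v3"
)

// makeRegexHeaderMatcher matches a header by regex via the string_match
// field rather than the deprecated safe_regex_match, leaving the
// (also deprecated) regex engine type unset.
func makeRegexHeaderMatcher(name, pattern string) *routev3.HeaderMatcher {
	return &routev3.HeaderMatcher{
		Name: name,
		HeaderMatchSpecifier: &routev3.HeaderMatcher_StringMatch{
			StringMatch: &matcherv3.StringMatcher{
				MatchPattern: &matcherv3.StringMatcher_SafeRegex{
					SafeRegex: &matcherv3.RegexMatcher{
						// EngineType deliberately not set.
						Regex: pattern,
					},
				},
			},
		},
	}
}
```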
* We've noticed runners appearing to become resource-starved during heavy
CI traffic. While we should try to prevent this by limiting the
scanner's CPU consumption, increasing the runner size should help in the
interim.