When a large number of upstreams are configured on a single Envoy
proxy, there was a chance that it would time out while waiting for
ClusterLoadAssignments. While this doesn't always cause immediate
issues, consul-dataplane instances appear to consistently drop
endpoints from their configurations after an xDS connection is
re-established (the server dies, a random disconnect occurs, etc.).
This commit adds an `xds_fetch_timeout_ms` config to service registrations
so that users can set the value higher for large instances that have
many upstreams. The timeout can be disabled by setting a value of `0`.
This configuration was introduced to reduce the risk of causing a
breaking change for users if there is ever a scenario where endpoints
would never be received: rather than every proxy blocking indefinitely
or for a significantly longer period of time, the config affects
only the service instance associated with it.
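For illustration, here is a hedged sketch of setting the new option on a
sidecar proxy registration via the Go API client. Placing it under the opaque
proxy `Config` map is an assumption (similar Envoy options such as
`local_connect_timeout_ms` live there); check the proxy configuration
reference for the authoritative location.

```go
// Hedged sketch: register a sidecar proxy with a 30s xDS fetch budget.
// The placement of xds_fetch_timeout_ms is an assumption, not confirmed
// by this commit message.
package main

import "github.com/hashicorp/consul/api"

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	reg := &api.AgentServiceRegistration{
		Kind: api.ServiceKindConnectProxy,
		Name: "web-sidecar-proxy",
		Port: 21000,
		Proxy: &api.AgentServiceConnectProxyConfig{
			DestinationServiceName: "web",
			Config: map[string]interface{}{
				// 30s budget for the initial xDS fetch; 0 disables the
				// timeout entirely.
				"xds_fetch_timeout_ms": 30000,
			},
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		panic(err)
	}
}
```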
To ensure that shell code cannot be injected, capture the commit message
in an env var, then format it as needed.
Also fix several other issues with formatting and JSON escaping by
wrapping the entire message in a `toJSON` expression.
This fixes the following race condition:
- Send update endpoints
- Send update cluster
- Recv ACK endpoints
- Recv ACK cluster
Prior to this fix, this sequence resulted in the endpoints NOT existing in
Envoy. This occurred because the cluster update implicitly clears the endpoints
in Envoy, but we would never re-send the endpoint data to compensate for the
loss, because we would incorrectly ACK the stale endpoint hash. Since the
endpoints' hash did not actually change, they would not be resent.
The fix for this is to effectively clear out the invalid pending ACKs for child
resources whenever the parent changes. This ensures that we do not store the
child's hash as accepted when the race occurs.
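In rough Go terms, the bookkeeping looks like the sketch below. This is
illustrative only; the names are hypothetical, not Consul's actual xDS
delta-server types.

```go
// Illustrative sketch only. pendingAcks tracks, per resource name, the
// hash we sent and are still waiting on Envoy to ACK.
type ackTracker struct {
	pendingAcks map[string]string // resource name -> in-flight hash
}

// clearChildren runs whenever a parent resource (e.g. a Cluster) is
// re-sent. Envoy implicitly drops the children (e.g. the Cluster's
// ClusterLoadAssignment), so any ACK still in flight for them is stale:
// accepting it would record a hash for data Envoy no longer holds.
func (t *ackTracker) clearChildren(childNames []string) {
	for _, name := range childNames {
		delete(t.pendingAcks, name)
	}
}

// ack records an ACK only if it matches a hash that is still in flight.
// A stale ACK for a cleared child is ignored, leaving the child marked
// dirty so its data is re-sent on the next pass.
func (t *ackTracker) ack(name, hash string) bool {
	if t.pendingAcks[name] != hash {
		return false
	}
	delete(t.pendingAcks, name)
	return true
}
```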
An escape-hatch environment variable `XDS_PROTOCOL_LEGACY_CHILD_RESEND` was
added so that users can revert to the legacy behavior in the event
that this change produces unexpected side effects. Visit the following thread
for some extra context on why certainty around these race conditions is difficult:
https://github.com/envoyproxy/envoy/issues/13009
This bug report and fix were mostly implemented by @ksmiley, with some minor
tweaks.
Co-authored-by: Keith Smiley <ksmiley@salesforce.com>
* Add CE version of gateway-upstream-disambiguation
* Use NamespaceOrDefault and PartitionOrDefault
* Add Changelog entry
* Remove the unneeded reassignment
* Use c.ID()
* Adding CLI command to list exported services to a peer
* Changelog added
* Addressing docs comments
* Adding test case for no exported services scenario
The client.rpc metric now excludes internal retries for consistency
with client.rpc.exceeded and client.rpc.failed. All of these metrics
now increment at most once per RPC method call, allowing accurate
calculation of how often failures and rate limiting occur.
Additionally, if an RPC fails because no servers are present,
client.rpc.failed is now incremented.
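As a hedged sketch of the accounting described above (the real client code
paths differ), using the hashicorp go-metrics package; `pickServer`, `doRPC`,
and `canRetry` are hypothetical stand-ins for the client internals:

```go
package client

import (
	"errors"

	metrics "github.com/hashicorp/go-metrics"
)

var errNoServers = errors.New("no known Consul servers")

func call(pickServer func() string, doRPC func(server string) error, canRetry func(error) bool) error {
	// client.rpc: exactly once per method call, never once per internal
	// retry, so it lines up with client.rpc.exceeded and client.rpc.failed.
	metrics.IncrCounter([]string{"client", "rpc"}, 1)

	for {
		server := pickServer()
		if server == "" {
			// No servers available is now also counted as a failure.
			metrics.IncrCounter([]string{"client", "rpc", "failed"}, 1)
			return errNoServers
		}
		err := doRPC(server)
		if err == nil {
			return nil
		}
		if !canRetry(err) {
			metrics.IncrCounter([]string{"client", "rpc", "failed"}, 1)
			return err
		}
		// Retryable error: loop again without touching any counters.
	}
}
```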
* Add a make target to run lint-consul-retry on all the modules
* Cleanup sdk/testutil/retry
* Fix a bunch of retry.Run* usage to not use the outer testing.T (see the sketch after this list)
* Fix some more recent retry lint issues and pin to v1.4.0 of lint-consul-retry
* Fix codegen copywrite lint issues
* Don’t perform cleanup after each retry attempt by default.
* Use the common testutil.TestingTB interface in test-integ/tenancy
* Fix retry tests
* Update otel access logging extension test to perform requests within the retry block
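For reference, the core pattern behind the `retry.Run*` fixes above: failures
inside the retry block must be reported on the `*retry.R` passed to the
closure, not the outer `*testing.T`, or the retry never happens. A minimal
sketch, where `serviceRegistered` is a hypothetical stand-in for the
eventually consistent state the test polls:

```go
package example

import (
	"testing"

	"github.com/hashicorp/consul/sdk/testutil/retry"
)

func TestServiceRegistered(t *testing.T) {
	retry.Run(t, func(r *retry.R) {
		// Report on r, not t: calling t.Fatalf here would abort the
		// whole test instead of retrying the check.
		ok, err := serviceRegistered("web")
		if err != nil {
			r.Fatalf("lookup failed: %v", err)
		}
		if !ok {
			r.Fatal("service not registered yet")
		}
	})
}

// serviceRegistered is a hypothetical helper for this sketch.
func serviceRegistered(name string) (bool, error) {
	return true, nil
}
```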
- Skip notifications for cancelled workflows. Cancellation can be
manual or caused by branch concurrency limits.
- Fix a multi-line JSON parsing error by printing only the summary line
  of the commit message. We do not need more than this in Slack.
- Update Slack webhook name to match purpose.
* Upgrade hcp-sdk-go to latest version v0.73
Changes:
- go get github.com/hashicorp/hcp-sdk-go
- go mod tidy
* From upgrade: regenerate protobufs for upgrade from 1.30 to 1.31
Ran: `make proto`
Slack: https://hashicorp.slack.com/archives/C0253EQ5B40/p1701105418579429
* From upgrade: fix mock interface implementation
After upgrading, there is the following compile error:
```
cannot use &mockHCPCfg{} (value of type *mockHCPCfg) as "github.com/hashicorp/hcp-sdk-go/config".HCPConfig value in return statement: *mockHCPCfg does not implement "github.com/hashicorp/hcp-sdk-go/config".HCPConfig (missing method Logout)
```
Solution: update the mock to have the missing Logout method
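The fix is a one-method addition to the mock; a minimal sketch, with the
`error` return assumed from the missing-method compile error above:

```go
// Satisfy the updated config.HCPConfig interface with a no-op Logout.
// The signature is an assumption inferred from the compile error.
func (m *mockHCPCfg) Logout() error {
	return nil
}
```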
* From upgrade: Lint: remove usage of deprecated req.ServerState.TLS
After the upgrade, linting fails due to usage of a newly deprecated field:
```
22:47:56 [consul]: make lint
--> Running golangci-lint (.)
agent/hcp/testing.go:157:24: SA1019: req.ServerState.TLS is deprecated: use server_tls.internal_rpc instead. (staticcheck)
	time.Until(time.Time(req.ServerState.TLS.CertExpiry)).Hours()/24,
	                     ^
```
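A hedged sketch of the corresponding fix; the generated Go field names
(`ServerTLS.InternalRPC`) are an assumption derived from the proto path
`server_tls.internal_rpc` named in the deprecation notice:

```go
// Before (deprecated field):
//   time.Until(time.Time(req.ServerState.TLS.CertExpiry)).Hours()/24,
// After (assumed field names for server_tls.internal_rpc):
time.Until(time.Time(req.ServerState.ServerTLS.InternalRPC.CertExpiry)).Hours() / 24,
```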
* From upgrade: adjust oidc error message
From the upgrade, this test started failing:
```
=== FAIL: internal/go-sso/oidcauth TestOIDC_ClaimsFromAuthCode/failed_code_exchange (re-run 2) (0.01s)
    oidc_test.go:393: unexpected error: Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"
```
Prior to the upgrade, the error returned was:
```
Provider login failed: Error exchanging oidc code: oauth2: cannot fetch token: 401 Unauthorized\nResponse: {\"error\":\"invalid_grant\",\"error_description\":\"unexpected auth code\"}\n
```
Now the error returned is as below and no longer contains "cannot fetch token":
```
Provider login failed: Error exchanging oidc code: oauth2: "invalid_grant" "unexpected auth code"
```
* Update AgentPushServerState structs with new fields
HCP-side changes for the new fields are in:
https://github.com/hashicorp/cloud-global-network-manager-service/pull/1195/files
* Minor refactor for hcpServerStatus to abstract tlsInfo into struct
This will make it easier to set the same TLS information on both (sketched below):
- status.TLS (deprecated field)
- status.ServerTLSMetadata (new field to use instead)
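Conceptually, the refactor looks something like this; the field shapes and
names here are illustrative, not the real structs:

```go
package sketch

import "time"

// Gather the TLS details once in one struct, then assign the same value
// to both the deprecated field and its replacement.
type tlsInfo struct {
	CertExpiry time.Time
	CertName   string
	CertSerial string
}

type serverStatus struct {
	TLS               tlsInfo // deprecated field, still populated for now
	ServerTLSMetadata struct {
		InternalRPC tlsInfo // new field to use instead
	}
}

func (s *serverStatus) setTLSInfo(info tlsInfo) {
	s.TLS = info
	s.ServerTLSMetadata.InternalRPC = info
}
```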
* Update hcpServerStatus to parse out information for new fields
Changes:
- Improve error message and handling (encountered some issues and was confused)
- Set new field TLSInfo.CertIssuer
- Collect certificate authority metadata and set on TLSInfo.CertificateAuthorities
- Set TLSInfo on both server.TLS and server.ServerTLSMetadata.InternalRPC
* Update serverStatusToHCP to convert new fields to GNM rpc
* Add changelog
* Feedback: connect.ParseCert, caCerts
* Feedback: refactor and unit test server status
* Feedback: test to use expected struct
* Feedback: certificate with intermediate
* Feedback: catch no leaf, remove expectedErr
* Feedback: update todos with jira ticket
* Feedback: mock tlsConfigurator
test: Address occasional flakes in sidecarproxy/controller_test.go
We've observed an occasional flake in this test where some state check
fails. Wrapping these state checks in wait/retry wrappers will hopefully
address the issue, assuming it is a simple flake.
* Update catalog and ui endpoints to show APIGateway in gateway service
topology view
* Added initial implementation for service view
* updated ui
* Fix topology view for gateways
* Adding tests for gw controller
* remove unused args
* Undo formatting changes
* Fix call sites for upstream/downstream gw changes
* Add config entry tests
* Fix function calls again
* Move from ServiceKey to ServiceName, cleanup from PR review
* Add additional check for length of services in bound apigateway for
IsSame comparison
* fix formatting for proto
* gofmt
* Add DeepCopy for retrieved BoundAPIGateway
* gofmt
* gofmt
* Rename function to be more consistent
* add build tags/import k8s specific proto packages
* fix generated import paths
* fix gomod linting issue
* mod tidy every go mod file
* revert protobuf version, take care of in a different PR
* cleaned up new lines
* added newline to end of file
docs: Add locality examples and troubleshooting
Add further examples and tips for locality-aware routing configuration,
observability, and troubleshooting.
* Add meshconfiguration/controller
* Add MeshConfiguration Registration function
* Fix the TODOs on the RegisterMeshGateway function
* Call RegisterMeshConfiguration
* Add comment to MeshConfigurationRegistration
* Add a test for Reconcile and some comments