* remove v2 tenancy, catalog, and mesh
- Inline the v2tenancy experiment to false (see the sketch after this list)
- Inline the resource-apis experiment to false
- Inline the hcp-v2-resource-apis experiment to false
- Remove ACL policy templates and rule language changes related to
workload identities, a v2-only concept (e.g. identity and
identity_prefix)
- Update the gRPC endpoint used by consul-dataplane to no longer respond
specially for v2
- Remove stray v2 references scattered throughout the newer DNS v1.5
implementation.
* changelog
* go mod tidy on consul containers
* lint fixes from ENT
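Regarding the experiment inlining above: a minimal sketch, assuming a hypothetical package and flag helper (experiments.V2TenancyEnabled), of what "inlining an experiment to false" means in practice; these names are illustrative and not the actual Consul identifiers.

```go
// Sketch only: a hypothetical experiment flag inlined to a constant false,
// which makes the v2-only branches dead code that can then be deleted.
package experiments

// V2TenancyEnabled previously consulted the runtime experiment list; it is
// now inlined to false.
func V2TenancyEnabled() bool { return false }

// Callers that branched on the experiment can then drop the v2 path:
//
//   if experiments.V2TenancyEnabled() { // always false
//       return handleV2(req)           // dead code, removed
//   }
//   return handleV1(req)
```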
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>
When the v2 catalog experiment is enabled, the old v1 catalog APIs will be
forcibly disabled at both the API (JSON) layer and the RPC (msgpack) layer.
This also disables anti-entropy, since it uses the v1 API.
The disabled endpoints include all of /v1/catalog/*, /v1/health/*, most of
/v1/agent/*, /v1/config/*, and most of /v1/internal/*.
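A minimal sketch of how that gating could look at the HTTP layer, assuming a hypothetical middleware and flag wiring; the names and prefix list below are illustrative, not the actual Consul code.

```go
// Sketch only: reject v1 catalog-family requests while the v2 catalog
// experiment is enabled. Names and prefixes are illustrative.
package agent

import (
	"net/http"
	"strings"
)

// v1CatalogPrefixes lists endpoint families gated off by the experiment.
var v1CatalogPrefixes = []string{"/v1/catalog/", "/v1/health/", "/v1/config/"}

// gateV1Catalog wraps an HTTP handler and rejects v1 catalog-family
// requests when v2CatalogEnabled is true.
func gateV1Catalog(next http.Handler, v2CatalogEnabled bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if v2CatalogEnabled {
			for _, p := range v1CatalogPrefixes {
				if strings.HasPrefix(r.URL.Path, p) {
					http.Error(w, "v1 catalog APIs are disabled by the v2 catalog experiment", http.StatusBadRequest)
					return
				}
			}
		}
		next.ServeHTTP(w, r)
	})
}
```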
* Refactors the leafcert package so it no longer depends on agent/consul and agent/cache, avoiding import cycles. This way the xds controller can import the leafcert package directly to use the leafcert manager.
The leaf cert logic in the controller:
* Sets up watches for leaf certs that are referenced in the ProxyStateTemplate (which generates the leaf certs too).
* Gets the leaf cert from the leaf cert cache
* Stores the leaf cert in the ProxyState that's pushed to xds
* For the cert watches, this PR also uses a bimapper plus a thin wrapper to map leaf cert events to the related ProxyStateTemplates (a sketch of this mapping follows below)
Since bimapper uses a resource.Reference or resource.ID to map between two resource types, I've created an internal type for a leaf certificate to use as the resource.Reference, since a leaf cert is not a v2 resource.
The wrapper allows mapping events to resources (as opposed to mapping resources to resources).
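A minimal sketch of the event-to-resource mapping idea, assuming a simplified bimapper-like structure; the types and methods below (leafMapper, leafReference, proxyID) are illustrative stand-ins, not the actual internal types.

```go
// Sketch only: track which ProxyStateTemplates reference a leaf cert, and
// translate a leaf cert event back into the proxies to re-reconcile.
package xdsmapper

import "sync"

// leafReference stands in for the internal, non-v2 reference type used on
// the leaf certificate side of the bimapper.
type leafReference struct {
	Partition, Namespace, Name string
}

// proxyID stands in for a ProxyStateTemplate resource ID.
type proxyID struct {
	Partition, Namespace, Name string
}

// leafMapper is the thin wrapper around the bimapper idea.
type leafMapper struct {
	mu     sync.Mutex
	byLeaf map[leafReference]map[proxyID]struct{}
}

// TrackLeaf records that a ProxyStateTemplate references a leaf cert.
func (m *leafMapper) TrackLeaf(leaf leafReference, proxy proxyID) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.byLeaf == nil {
		m.byLeaf = map[leafReference]map[proxyID]struct{}{}
	}
	if m.byLeaf[leaf] == nil {
		m.byLeaf[leaf] = map[proxyID]struct{}{}
	}
	m.byLeaf[leaf][proxy] = struct{}{}
}

// MapLeafEvent returns the ProxyStateTemplates affected by a leaf cert event.
func (m *leafMapper) MapLeafEvent(leaf leafReference) []proxyID {
	m.mu.Lock()
	defer m.mu.Unlock()
	var out []proxyID
	for p := range m.byLeaf[leaf] {
		out = append(out, p)
	}
	return out
}
```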
The controller tests:
Unit: Ensure that we resolve leaf cert references
Lifecycle: Ensure that when the CA is updated, the leaf cert is as well
Also adds a new SPIFFE ID type, and adds the workload identity and workload identity URI to leaf certs, so that certs are generated with the new workload-identity-based SPIFFE ID.
* Pulls out some leaf cert test helpers into a helpers file so they
can be used in the xds controller tests.
* Wires up leaf cert manager dependency
* Support getting token from proxytracker
* Add the workload identity SPIFFE ID type to the authorize and sign functions (see the sketch after this list)
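As an illustration of the workload-identity-based SPIFFE ID, here is a minimal sketch of building such a URI; the exact path layout (ap/ns/identity segments) and function name are assumptions for illustration, not taken verbatim from the Consul source.

```go
// Sketch only: construct a workload-identity SPIFFE ID. The path layout
// below is an assumed example, not necessarily Consul's exact format.
package spiffesketch

import (
	"fmt"
	"net/url"
)

// WorkloadIdentitySPIFFEID builds a SPIFFE URI that identifies a workload
// identity within a trust domain, partition, and namespace.
func WorkloadIdentitySPIFFEID(trustDomain, partition, namespace, identity string) *url.URL {
	return &url.URL{
		Scheme: "spiffe",
		Host:   trustDomain,
		Path: fmt.Sprintf("/ap/%s/ns/%s/identity/%s",
			partition, namespace, identity),
	}
}
```

Such a URI would then be placed in the leaf certificate's URI SAN during signing, which is what lets the authorize and sign paths recognize workload identities.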
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>
* Adding explicit MPL license for sub-package
This directory and its subdirectories (packages) contain files licensed with the MPLv2 `LICENSE` file in this directory and are intentionally licensed separately from the BSL `LICENSE` file at the root of this repository.
* Updating the license from MPL to Business Source License
Going forward, this project will be licensed under the Business Source License v1.1. Please see our blog post for more details at <Blog URL>, FAQ at www.hashicorp.com/licensing-faq, and details of the license at www.hashicorp.com/bsl.
* add missing license headers
* Update copyright file headers to BUSL-1.1
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
This can happen when another node in the cluster, such as a client, is unable to communicate with the leader server and sees it as failed. When that happens, the leader's failed status eventually gets propagated to the other servers in the cluster, which can result in RPCs returning a “No cluster leader” error.
That error is misleading and unhelpful for determining the root cause of the issue, since the problem is not Raft instability but rather a client -> server networking issue. Therefore this commit adds a new error that will be returned in that case to differentiate between the two.
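A minimal sketch of what differentiating the two failure modes could look like; the error names and the decision logic are illustrative assumptions, not the actual Consul identifiers.

```go
// Sketch only: distinguish "no leader elected" (Raft instability) from
// "a leader exists but this node cannot reach or track it" (networking or
// membership issue). Names are illustrative.
package rpcerrors

import "errors"

var (
	ErrNoLeader         = errors.New("No cluster leader")
	ErrLeaderNotTracked = errors.New("Raft leader not reachable from this node")
)

// chooseLeaderError picks which error to return when an RPC cannot be
// forwarded. raftLeaderAddr is the leader Raft reports (empty if none);
// knownToThisNode reports whether this node's membership view includes it.
func chooseLeaderError(raftLeaderAddr string, knownToThisNode bool) error {
	if raftLeaderAddr != "" && !knownToThisNode {
		// Raft has a leader, so the cluster is stable; the problem is most
		// likely connectivity between this node and the leader.
		return ErrLeaderNotTracked
	}
	return ErrNoLeader
}
```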
This endpoint aggregates all checks related to <service id> on the agent
and returns an appropriate HTTP code plus a string describing the worst
check.
This allows cleanly exposing the service status to other components,
hiding the complexity of multiple checks.
This is especially useful when using Consul to feed a load balancer that
delegates health checking to the Consul agent.
Exposing this endpoint on the agent is necessary to avoid a hit on the
Consul servers and to avoid decreasing resiliency (this endpoint will work
even if there is no Consul leader in the cluster).
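A minimal sketch of the worst-check aggregation idea; the status-to-HTTP-code mapping (200/429/503) and the function names are assumptions for illustration rather than the endpoint's documented contract.

```go
// Sketch only: collapse a service's checks to the "worst" status and map
// that status to an HTTP code. Names and the code mapping are illustrative.
package healthagg

import "net/http"

// rank orders check statuses from best to worst.
var rank = map[string]int{"passing": 0, "warning": 1, "critical": 2}

// WorstStatus returns the most severe status among the given checks.
func WorstStatus(statuses []string) string {
	worst := "passing"
	for _, s := range statuses {
		if rank[s] > rank[worst] {
			worst = s
		}
	}
	return worst
}

// StatusCode maps the aggregated status to an HTTP response code.
func StatusCode(status string) int {
	switch status {
	case "passing":
		return http.StatusOK // 200
	case "warning":
		return http.StatusTooManyRequests // 429 (assumed mapping)
	default:
		return http.StatusServiceUnavailable // 503
	}
}
```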
* Adds client-side retry for no leader errors.
This paves over the case where the client was connected to the leader
when it loses leadership.
* Adds a configurable server RPC drain time and a fail-fast path for RPCs.
When a server leaves it gets removed from the Raft configuration, so it will
never know who the new leader server ends up being. Without this we'd be
doomed to wait out the RPC hold timeout and then fail. This makes things fail
a little quicker while a server is draining, and since we added a client retry
AND since the server doing this has already shut down and left the Serf LAN,
clients should retry against some other server.
* Makes the RPC hold timeout configurable.
* Reorders struct members.
* Sets the RPC hold timeout default for test servers.
* Bumps the leave drain time up to 5 seconds.
* Robustifies retries with a simpler client-side RPC hold (see the sketch after this list).
* Reverts an unintended delete.
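A minimal sketch of the client-side RPC hold idea, assuming hypothetical names (rpcWithHold, errNoLeader): leadership-related failures are retried with a small jittered wait until the hold window is exhausted, ideally hitting a different server on each attempt.

```go
// Sketch only: retry RPCs that fail due to missing leadership, up to a
// configurable hold timeout, with a jittered pause between attempts.
package rpcretry

import (
	"errors"
	"math/rand"
	"strings"
	"time"
)

var errNoLeader = errors.New("No cluster leader")

// rpcWithHold keeps retrying fn for retryable (leadership) errors until
// rpcHoldTimeout has elapsed, then returns the last error.
func rpcWithHold(rpcHoldTimeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(rpcHoldTimeout)
	for {
		err := fn()
		if err == nil || !retryable(err) || time.Now().After(deadline) {
			return err
		}
		// Jittered pause before retrying, ideally against another server.
		wait := rpcHoldTimeout / 16
		if wait > 0 {
			wait += time.Duration(rand.Int63n(int64(wait)))
		}
		time.Sleep(wait)
	}
}

func retryable(err error) bool {
	return errors.Is(err, errNoLeader) || strings.Contains(err.Error(), "leadership lost")
}
```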
* Added rate limiting for agent RPC calls (a sketch follows this list).
* Initializes the rate limiter based on the config.
* Adds the rate limiter into the snapshot RPC path.
* Adds unit tests for the RPC rate limiter.
* Groups the RPC limit parameters under "limits" in the config.
* Adds some documentation about the RPC limiter.
* Sends a 429 response when the rate limiter kicks in.
* Adds docs for new telemetry.
* Makes snapshot telemetry look like RPC telemetry and cleans up comments.
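A minimal sketch of token-bucket rate limiting in front of RPC handling, using golang.org/x/time/rate; the config field names, error name, and the HTTP translation are illustrative assumptions rather than the exact Consul wiring.

```go
// Sketch only: apply a token-bucket limiter to incoming RPCs (including
// snapshot RPCs) and surface the limit as HTTP 429 at the API layer.
package rpclimit

import (
	"errors"
	"net/http"

	"golang.org/x/time/rate"
)

var ErrRPCRateExceeded = errors.New("RPC rate limit exceeded")

// Limits mirrors a "limits" config block with an RPC rate and burst.
type Limits struct {
	RPCRate     float64 // tokens per second; <= 0 means unlimited
	RPCMaxBurst int
}

// NewRPCLimiter builds the limiter from config.
func NewRPCLimiter(l Limits) *rate.Limiter {
	if l.RPCRate <= 0 {
		return rate.NewLimiter(rate.Inf, l.RPCMaxBurst)
	}
	return rate.NewLimiter(rate.Limit(l.RPCRate), l.RPCMaxBurst)
}

// checkRPC is called at the start of each RPC.
func checkRPC(lim *rate.Limiter) error {
	if !lim.Allow() {
		return ErrRPCRateExceeded
	}
	return nil
}

// writeRPCError translates the limiter error into a 429 at the HTTP layer.
func writeRPCError(w http.ResponseWriter, err error) {
	if errors.Is(err, ErrRPCRateExceeded) {
		http.Error(w, err.Error(), http.StatusTooManyRequests) // 429
		return
	}
	http.Error(w, err.Error(), http.StatusInternalServerError)
}
```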