* Docs - k8s - Webhook Certs on Vault
* Adding webhook certs to data-integration overview page
* Marking items as code
* Apply suggestions from code review
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Updating prerequisites intro
* Updating `Create a Vault auth roles that link the policy to each Consul on Kubernetes service account that requires access` to `Link the Vault policy to Consul workloads`
* Changing `Configure the Vault Kubernetes auth role in the Consul on Kubernetes helm chart` to `Update the Consul on Kubernetes helm chart`.
* Changed `Create a Vault PKI role that establishes the domains that it is allowed to issue certificates for` to `Configure allowed domains for PKI certificates`
* Moved `Create a Vault policy that authorizes the desired level of access to the secret` to the Set up per Consul Datacenter section
* Update website/content/docs/k8s/installation/vault/data-integration/webhook-certs.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Moving Overview above Prerequisites. Adding sentence where missing after page title.
* Moving Overview above Prerequisites for webhook certs page.
* Fixing the end of the Overview section that was not moved.
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Having this type live in the agent/consul package makes it difficult to
put anything that relies on token resolution (e.g. the new gRPC services)
in separate packages without introducing import cycles.
For example, if package foo imports agent/consul for the ACLResolveResult
type it means that agent/consul cannot import foo to register its service.
We've previously worked around this by wrapping the ACLResolver to
"downgrade" its return type to an acl.Authorizer. Aside from the added
complexity, this also loses the resolved identity information.
In the future, we may want to move the whole ACLResolver into the
acl/resolver package. For now, putting the result type there at least
fixes the immediate import cycle issues.
This is only configured in xDS when a service with an L7 protocol is
exported.
They also load any relevant trust bundles for the peered services to
eventually use for L7 SPIFFE validation during mTLS termination.
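For illustration only, a sketch of the kind of protocol gate this implies,
using a hypothetical isL7 helper; the protocol names are Consul's usual
values, but the real check lives in the xDS generation code and may differ:

```go
package main

import "fmt"

// isL7 reports whether a service protocol is one of the L7 protocols for
// which this configuration would apply. Hypothetical helper.
func isL7(protocol string) bool {
	switch protocol {
	case "http", "http2", "grpc":
		return true
	default: // "tcp" and anything unrecognized
		return false
	}
}

func main() {
	for _, p := range []string{"tcp", "http", "grpc"} {
		fmt.Printf("%s -> configure L7 validation: %v\n", p, isL7(p))
	}
}
```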
Adds the `merge-central-config` query param option to the `/catalog/node-services/:node-name` API,
to get service definitions in the response that are merged with central defaults (proxy-defaults/service-defaults).
Updates the `consul connect envoy` command to use this option when
retrieving the proxy service details so that the bootstrap configuration renders correctly.
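As a usage sketch, the option can be exercised directly against the HTTP API;
the agent address, the `/v1` path prefix, and the node name `my-node` below
are assumptions:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Request the node's services with central defaults
	// (proxy-defaults/service-defaults) merged into each definition.
	url := "http://127.0.0.1:8500/v1/catalog/node-services/my-node?merge-central-config"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```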
- Fix sg: need remote access to the test server
- Give the load generator a name
- Update the loadtest HCL filename in the README
- Add `terraform init`
- Disable access to the server machine by default
When our peer deletes the peering, it is locally marked as terminated.
This termination should kick off deleting all imported data, but should
not delete the peering object itself.
Keeping peerings marked as terminated acts as a signal that the action
took place.
Once a peering is marked for deletion, a new leader routine will now
clean up all imported resources and then the peering itself; a sketch of
that routine follows the list below.
A lot of the logic was adapted from the namespace/partition deferred
deletions, but with a handful of simplifications:
- The rate limiting is not configurable.
- Deleting imported nodes/services/checks is done by deleting nodes with
the Txn API. The services and checks are deleted as a side-effect.
- There is no "round rate limiter" like with namespaces and partitions.
  This is because peerings are purely local: deleting a peering in the
  datacenter does not depend on deleting data from other DCs, as it does
  with WAN-federated namespaces. All rate limiting is handled by the
  Raft rate limiter.
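A simplified sketch of the shape of such a routine, with a hypothetical
Store interface and a fixed-rate limiter standing in for Consul's internal
state store, Txn batching, and Raft rate limiter:

```go
package cleanup

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

// Store is a stand-in for the state-store operations the routine needs;
// method names are hypothetical.
type Store interface {
	// ImportedNodes lists nodes imported from the given peer.
	ImportedNodes(peer string) ([]string, error)
	// DeleteNodeTxn deletes a node via the Txn API; its services and
	// checks are removed as a side-effect.
	DeleteNodeTxn(node string) error
	// DeletePeering removes the peering object itself.
	DeletePeering(peer string) error
}

// cleanupPeering deletes all data imported from a peer and then the
// peering itself. The rate is fixed rather than configurable.
func cleanupPeering(ctx context.Context, s Store, peer string) error {
	limiter := rate.NewLimiter(rate.Every(100*time.Millisecond), 1)

	nodes, err := s.ImportedNodes(peer)
	if err != nil {
		return err
	}
	for _, node := range nodes {
		if err := limiter.Wait(ctx); err != nil {
			return err
		}
		if err := s.DeleteNodeTxn(node); err != nil {
			return fmt.Errorf("deleting node %q: %w", node, err)
		}
	}

	// Only once the imported data is gone is the peering object removed.
	return s.DeletePeering(peer)
}
```

In the real routine the leader watches for peerings marked for deletion and
runs this kind of cleanup for each one.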