These changes are primarily for Consul's UI, where we want to be more
specific about the state a peering is in.
- The "initial" state was renamed to pending, and no longer applies to
peerings being established from a peering token.
- Upon a request to establish a peering from a peering token, the
peering will be set to "establishing". This helps distinguish between
the two roles: the cluster that generates the peering token and the
cluster that establishes the peering.
- When marked for deletion, the peering state will be set to "deleting".
This lets the UI determine deletion from the state rather than from the
"DeletedAt" field; a rough sketch of the resulting state set follows.
Co-authored-by: freddygv <freddy@hashicorp.com>
When our peer deletes the peering, it is locally marked as terminated.
This termination should kick off deletion of all imported data, but
should not delete the peering object itself.
Keeping peerings marked as terminated acts as a signal that the action
took place.
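
A rough sketch of that behavior, with hypothetical store methods standing in for Consul's actual internals:

```go
package peering

// TerminationStore is a hypothetical view of the state store operations the
// termination handler needs; Consul's real interfaces differ.
type TerminationStore interface {
	// DeleteImportedData removes all nodes/services/checks imported from the peer.
	DeleteImportedData(peer string) error
	// MarkPeeringTerminated flips the local peering's state to terminated.
	MarkPeeringTerminated(peer string) error
}

// handlePeerDeletedPeering reacts to the peer deleting the peering on their
// side: imported data is cleaned up, but the local peering record is kept,
// marked terminated, as a signal that the termination happened.
func handlePeerDeletedPeering(store TerminationStore, peer string) error {
	if err := store.DeleteImportedData(peer); err != nil {
		return err
	}
	// Intentionally do NOT delete the peering object itself.
	return store.MarkPeeringTerminated(peer)
}
```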
Once a peering is marked for deletion, a new leader routine will now
clean up all imported resources and then the peering itself (a sketch of
the routine follows the list below).
A lot of the logic was grabbed from the namespace/partition deferred
deletions, but with a handful of simplifications:
- The rate limiting is not configurable.
- Deleting imported nodes/services/checks is done by deleting nodes with
the Txn API. The services and checks are deleted as a side-effect.
- There is no "round rate limiter" like with namespaces and partitions.
This is because peerings are purely local, and deleting a peering in
the datacenter does not depend on deleting data from other DCs like
with WAN-federated namespaces. All rate limiting is handled by the
Raft rate limiter.
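
A minimal sketch of that cleanup routine, assuming hypothetical state store helpers (`DeletionStore` and its methods are stand-ins, and the rate value is illustrative; the real code uses Consul's internal state store and Txn APIs):

```go
package peering

import (
	"context"

	"golang.org/x/time/rate"
)

// DeletionStore is a hypothetical interface standing in for the state store
// operations the cleanup routine needs.
type DeletionStore interface {
	// PeeringListDeleted returns peerings whose DeletedAt field is set.
	PeeringListDeleted() ([]string, error)
	// ImportedNodes lists nodes imported from the given peer.
	ImportedNodes(peer string) ([]string, error)
	// DeleteNodeTxn deletes a node via a transaction; its services and checks
	// are removed as a side effect.
	DeleteNodeTxn(node, peer string) error
	// DeletePeering removes the peering record once its imported data is gone.
	DeletePeering(peer string) error
}

// cleanupDeletedPeerings drains imported data for every peering marked for
// deletion, throttled by a fixed (non-configurable) rate limiter, then
// removes the peering itself.
func cleanupDeletedPeerings(ctx context.Context, store DeletionStore) error {
	limiter := rate.NewLimiter(rate.Limit(50), 1) // fixed rate; value is illustrative

	peers, err := store.PeeringListDeleted()
	if err != nil {
		return err
	}
	for _, peer := range peers {
		nodes, err := store.ImportedNodes(peer)
		if err != nil {
			return err
		}
		for _, node := range nodes {
			if err := limiter.Wait(ctx); err != nil {
				return err
			}
			// Deleting the node also deletes its services and checks.
			if err := store.DeleteNodeTxn(node, peer); err != nil {
				return err
			}
		}
		if err := store.DeletePeering(peer); err != nil {
			return err
		}
	}
	return nil
}
```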
When deleting a peering, we do not want to delete the peering and all
imported data in a single operation, since deleting a large amount of
data at once could overload Consul.
Instead we defer deletion of peerings so that:
1. When a peering deletion request is received via gRPC, the peering is
marked for deletion by setting the DeletedAt field.
2. A leader routine will monitor for peerings that are marked for
deletion and kick off a throttled deletion of all imported resources
before deleting the peering itself.
This commit mostly addresses point #1 by modifying the peering service
to mark peerings for deletion. Another key change is the addition of a
PeeringListDeleted state store function, which returns all peerings
marked for deletion. This function will be watched by the deferred
deletion leader routine.
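
For illustration, the delete handler from point #1 might be shaped roughly like this (a simplified sketch; the `Peering` struct, `PeeringStore` interface, and method names are stand-ins, not Consul's actual peering service or state store API):

```go
package peering

import (
	"context"
	"errors"
	"time"
)

// Peering is a trimmed-down stand-in for the internal peering record.
type Peering struct {
	Name      string
	State     string
	DeletedAt time.Time
}

// PeeringStore is a hypothetical slice of the state store used by the handler.
type PeeringStore interface {
	PeeringRead(name string) (*Peering, error)
	PeeringWrite(p *Peering) error
}

// PeeringDelete no longer removes the peering directly; it only marks it for
// deletion so the leader routine can do the throttled cleanup later.
func PeeringDelete(ctx context.Context, store PeeringStore, name string) error {
	p, err := store.PeeringRead(name)
	if err != nil {
		return err
	}
	if p == nil {
		return errors.New("peering not found")
	}
	if !p.DeletedAt.IsZero() {
		return nil // already marked for deletion; nothing to do
	}
	p.DeletedAt = time.Now()
	p.State = "DELETING"
	return store.PeeringWrite(p)
}
```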
I noticed that the JSON API endpoints for peerings JSON-encode protobufs directly, rather than converting them into their `api` package equivalents before marshaling/unmarshaling them.
I updated this and used `mog` to do the annoying part in the middle.
Other changes:
- the status enum is converted into its friendlier string form for readability with tools like `curl` (see the sketch after this list)
- some of the `api` library functions were slightly modified to match other similar endpoints in UX (cc: @ndhanushkodi )
- peeringRead returns `nil` if not found
- partitions are NOT inferred from the agent's partition (matching 1.11-style logic)
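
A rough sketch of the shape of that conversion (simplified stand-in types; the real conversion is generated with `mog`, and the enum names/values here are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// internalPeering stands in for the protobuf-generated type used internally.
type internalPeering struct {
	Name  string
	State int32 // enum value
}

// APIPeering stands in for the `api` package equivalent returned over HTTP.
type APIPeering struct {
	Name  string
	State string // friendlier string form of the enum
}

// stateName maps illustrative enum values to their string names; the real
// mapping lives alongside the generated protobuf code.
var stateName = map[int32]string{
	0: "UNDEFINED",
	1: "PENDING",
	2: "ESTABLISHING",
	3: "DELETING",
	4: "TERMINATED",
}

// peeringToAPI converts the internal type before JSON encoding; returning nil
// for a nil input mirrors peeringRead returning nil when not found.
func peeringToAPI(p *internalPeering) *APIPeering {
	if p == nil {
		return nil
	}
	return &APIPeering{Name: p.Name, State: stateName[p.State]}
}

func main() {
	out, _ := json.Marshal(peeringToAPI(&internalPeering{Name: "cluster-02", State: 2}))
	fmt.Println(string(out)) // {"Name":"cluster-02","State":"ESTABLISHING"}
}
```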