docs: initial documentation for the new State Store (#17315)

Dan Upton authored on 2023-05-15 12:34:36 +01:00; committed by GitHub
commit 879b775459 (parent 95f462d5f1)
8 changed files with 123 additions and 0 deletions


@@ -25,6 +25,7 @@ be found in the public [user documentation].
1. [Agent Configuration](./config)
1. [RPC](./rpc)
1. [Cluster Persistence](./persistence)
1. [Resources and Controllers](./resources)
1. [Client Agent](./client-agent)
1. [Service Discovery](./service-discovery)
1. [Service Mesh (Connect)](./service-mesh)


@@ -1,5 +1,10 @@
# Cluster Persistence
> **Note**
> While the content of this document is still accurate, it doesn't cover the new
> generic resource-oriented storage layer introduced in Consul 1.16. Please see
> [Resources](../resources) for more information.
The cluster persistence subsystem runs entirely in Server Agents. It handles both read and
write requests from the [RPC] subsystem. See the [Consul Architecture Guide] for an
introduction to the Consul deployment architecture and the [Consensus Protocol] used by

docs/resources/README.md (new file)

@@ -0,0 +1,111 @@
# Resources
Consul 1.16 introduced a set of [generic APIs] for managing resources, and a
[controller runtime] for building functionality on top of them.
[generic APIs]: ../../proto-public/pbresource/resource.proto
[controller runtime]: ../../internal/controller
Previously, adding features to Consul involved making changes at every layer of
the stack, including HTTP handlers, RPC handlers, MemDB tables, Raft
operations, and CLI commands.
This architecture made sense when the product was maintained by a small core
group who could keep the entire system in their heads, but it presented
significant collaboration, ownership, and onboarding challenges as our
contributor base expanded to many engineers across several teams and the
product grew in complexity.
In the new model, teams can work with much greater autonomy by building on top
of a shared platform, owning their own resource types and controllers.
## Architecture Overview
![architecture diagram](./architecture-overview.png)
<sup>[source](https://whimsical.com/state-store-v2-UKE6SaEPXNc4UrZBrZj4Kg)</sup>
Our resource-oriented architecture comprises the following components:
#### Resource Service
[Resource Service](../../proto-public/pbresource/resource.proto) is a gRPC
service that contains the shared logic for creating, reading, updating,
deleting, and watching resources. It will be consumed by controllers, our
Kubernetes integration, and the CLI, and it will also be mapped to an
HTTP+JSON API.
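For a concrete feel for the API, here is a minimal client-side sketch of reading a resource through the Resource Service. It assumes the generated Go bindings in `proto-public/pbresource`; the `demo`/`Artist` type, the address, and the omission of tenancy are purely illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

func main() {
	// Connect to a Consul server's gRPC port (assumption: plaintext, local server).
	conn, err := grpc.Dial("127.0.0.1:8502",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := pbresource.NewResourceServiceClient(conn)

	// Read a single resource by its ID (type + name); tenancy is omitted for
	// brevity in this sketch.
	rsp, err := client.Read(context.Background(), &pbresource.ReadRequest{
		Id: &pbresource.ID{
			Type: &pbresource.Type{Group: "demo", GroupVersion: "v2", Kind: "Artist"},
			Name: "korn",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("resource version:", rsp.Resource.Version)
}
```

The write, delete, list, and watch operations are exposed through the same service, which is what lets the CLI, the HTTP+JSON API, and the Kubernetes integration all sit on top of one shared implementation.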
#### Type Registry
[Type Registry](../../internal/resource/registry.go) is where teams register
their resource types, along with hooks for performing structural validation,
authorization, etc.
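The sketch below shows what registering a type with a validation hook looks like conceptually. The `Registration` and `Registry` types here are simplified stand-ins, not the actual structs in `internal/resource/registry.go`, and the `Artist` type is illustrative.

```go
package registrysketch

import (
	"errors"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// Registration is a simplified stand-in for a real type registration: the
// resource type plus the hooks the shared service machinery will call.
type Registration struct {
	Type     *pbresource.Type
	Validate func(*pbresource.Resource) error // structural validation hook
}

// Registry is a simplified stand-in for the shared type registry.
type Registry struct {
	types map[string]Registration
}

// Register makes a resource type (and its hooks) known to the platform.
func (r *Registry) Register(reg Registration) {
	if r.types == nil {
		r.types = make(map[string]Registration)
	}
	key := reg.Type.Group + "." + reg.Type.GroupVersion + "." + reg.Type.Kind
	r.types[key] = reg
}

// Example: a team registering its own type with a validation hook.
func registerArtist(r *Registry) {
	r.Register(Registration{
		Type: &pbresource.Type{Group: "demo", GroupVersion: "v2", Kind: "Artist"},
		Validate: func(res *pbresource.Resource) error {
			if res.GetId().GetName() == "" {
				return errors.New("artist name is required")
			}
			return nil
		},
	})
}
```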
#### Storage Backend
[Storage Backend](../../internal/storage/storage.go) is an abstraction over
low-level storage primitives. Today, there are two implementations (Raft and
an in-memory backend for tests), but in the future we envisage that external
storage systems such as the Kubernetes API or an RDBMS could be supported,
which would reduce operational complexity for our customers.
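As a rough picture of the contract, here is a simplified interface sketch. Apart from `WriteCAS` (which the write walkthrough below refers to), the method names and signatures are assumptions for illustration; the real interface lives in `internal/storage/storage.go`.

```go
package storagesketch

import (
	"context"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// Backend abstracts low-level storage primitives so the resource service does
// not care whether data ultimately lives in Raft, memory, or an external system.
type Backend interface {
	// Read returns a single resource by ID.
	Read(ctx context.Context, id *pbresource.ID) (*pbresource.Resource, error)

	// WriteCAS writes a resource with compare-and-swap semantics on its
	// version, so concurrent writers cannot silently overwrite each other.
	WriteCAS(ctx context.Context, res *pbresource.Resource) (*pbresource.Resource, error)

	// DeleteCAS deletes a resource only if its version still matches.
	DeleteCAS(ctx context.Context, id *pbresource.ID, version string) error

	// List returns all resources of the given type.
	List(ctx context.Context, typ *pbresource.Type) ([]*pbresource.Resource, error)
}
```

Supporting a different storage system would then amount to providing another implementation of this contract, without touching the resource service or controllers.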
#### Controllers
[Controllers](../../internal/controller/api.go) implement Consul's business
logic using asynchronous control loops that respond to changes in resources.
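Conceptually, a controller is a reconcile function driven by watch events. The sketch below is a simplified stand-in for that pattern, not the actual API in `internal/controller/api.go` (which also handles dependency mapping, requeueing, and backoff).

```go
package controllersketch

import (
	"context"
	"log"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// Reconciler holds the business logic for one resource type.
type Reconciler interface {
	// Reconcile is called whenever the watched resource (or one of its
	// dependencies) changes. It should be idempotent: it compares desired
	// state (the resource) with actual state and converges the two.
	Reconcile(ctx context.Context, id *pbresource.ID) error
}

// runControlLoop drains watch events and invokes the reconciler. In this
// sketch a failed reconciliation is just logged and retried on the next
// event for the same resource.
func runControlLoop(ctx context.Context, events <-chan *pbresource.ID, r Reconciler) {
	for {
		select {
		case <-ctx.Done():
			return
		case id := <-events:
			if err := r.Reconcile(ctx, id); err != nil {
				log.Printf("reconcile of %q failed: %v", id.GetName(), err)
			}
		}
	}
}
```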
## Raft Storage Backend
Our [Raft Storage Backend](../../internal/storage/raft/backend.go) integrates
with the existing Raft machinery (e.g. FSM) used by the [old state store]. It
also transparently forwards writes and strongly consistent reads to the leader
over gRPC.
There's quite a lot going on here, so to dig into the details, let's take a look
at how a write operation is handled.
[old state store]: ../persistence/
![raft storage backend diagram](./raft-backend.png)
<sup>[source](https://whimsical.com/state-store-v2-UKE6SaEPXNc4UrZBrZj4Kg)</sup>
#### Steps 1 & 2
The user calls the resource service's `Write` endpoint on a Raft follower, which
in turn calls the storage backend's `WriteCAS` method.
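A simplified sketch of these two steps, with a hypothetical server struct standing in for the real resource service implementation:

```go
package writesketch

import (
	"context"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// backend is just the slice of the storage layer this sketch needs.
type backend interface {
	WriteCAS(ctx context.Context, res *pbresource.Resource) (*pbresource.Resource, error)
}

// server is a hypothetical stand-in for the resource service.
type server struct {
	backend backend
}

// Write hands the resource to the storage backend, which decides whether to
// apply it locally (leader) or forward it (follower, steps 3 & 4). Validation
// and authorization hooks are elided in this sketch.
func (s *server) Write(ctx context.Context, req *pbresource.WriteRequest) (*pbresource.WriteResponse, error) {
	written, err := s.backend.WriteCAS(ctx, req.Resource)
	if err != nil {
		return nil, err
	}
	return &pbresource.WriteResponse{Resource: written}, nil
}
```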
#### Steps 3 & 4
The storage backend determines that the current server is a Raft follower, and
forwards the operation to the leader via a gRPC [forwarding service] listening
on the multiplexed RPC port ([`ports.server`]).
[forwarding service]: ../../proto/private/pbstorage/raft.proto
[`ports.server`]: https://developer.hashicorp.com/consul/docs/agent/config/config-files#server_rpc_port
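A simplified sketch of the follower-side decision, using hypothetical interfaces in place of the real Raft handle and forwarding client:

```go
package forwardsketch

import (
	"context"
	"fmt"
)

// raftHandle exposes the two facts the backend needs: whether this server is
// currently the Raft leader, and how to reach the leader if it is not.
type raftHandle interface {
	IsLeader() bool
	LeaderAddress() (string, error)
}

// forwardingClient is a stand-in for the gRPC client of the forwarding
// service listening on the multiplexed server port.
type forwardingClient interface {
	Write(ctx context.Context, leaderAddr string, op []byte) ([]byte, error)
}

type backend struct {
	raft    raftHandle
	forward forwardingClient
}

// writeCAS applies the operation locally if this server is the leader, or
// transparently forwards it to the leader otherwise.
func (b *backend) writeCAS(ctx context.Context, op []byte) ([]byte, error) {
	if b.raft.IsLeader() {
		return b.applyLocally(ctx, op) // continues with step 5
	}
	addr, err := b.raft.LeaderAddress()
	if err != nil {
		return nil, fmt.Errorf("no known leader: %w", err)
	}
	return b.forward.Write(ctx, addr, op)
}

// applyLocally is a placeholder for the leader-side path described in step 5.
func (b *backend) applyLocally(ctx context.Context, op []byte) ([]byte, error) {
	return op, nil
}
```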
#### Step 5
The leader's storage backend serializes the operation to protobuf and applies it
to the Raft log. As we need to share the Raft log with the old state store, we go
through the [`consul.raftHandle`](../../agent/consul/raft_handle.go) and
[`consul.Server`](../../agent/consul/server/server.go), which apply a msgpack
envelope and a type-byte prefix.
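The framing can be pictured like this. The message-type value and the envelope struct below are purely illustrative (the real ones live in the FSM and msgpack plumbing), but the shape of the log entry is the point: one type byte, then a msgpack envelope wrapping the protobuf-encoded operation.

```go
package framingsketch

import (
	"bytes"

	"github.com/hashicorp/go-msgpack/codec"
)

// resourceOperationType is an illustrative stand-in for the message-type byte
// that identifies resource operations in the shared Raft log.
const resourceOperationType byte = 0x40

// envelope is a hypothetical msgpack envelope carrying the protobuf payload.
type envelope struct {
	Data []byte // protobuf-encoded storage operation
}

// frameLogEntry produces: [type byte][msgpack(envelope{protobuf bytes})].
func frameLogEntry(protobufOp []byte) ([]byte, error) {
	var buf bytes.Buffer
	buf.WriteByte(resourceOperationType)

	enc := codec.NewEncoder(&buf, &codec.MsgpackHandle{})
	if err := enc.Encode(envelope{Data: protobufOp}); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```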
#### Step 6
Raft consensus happens! Once the log has been committed, it is applied to the
[FSM](../../agent/consul/fsm/fsm.go) which calls the storage backend's `Apply`
method to apply the protobuf-encoded operation to the [`inmem.Store`].
[`inmem.Store`]: ../../internal/storage/inmem/store.go
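On the apply side, the FSM can be pictured as a dispatcher keyed on that type byte. Again, all names here are illustrative stand-ins for the real FSM and backend code:

```go
package fsmsketch

import (
	"bytes"
	"fmt"

	"github.com/hashicorp/go-msgpack/codec"
)

// resourceOperationType must match whatever type byte the leader used when
// framing the log entry (see the step 5 sketch above); the value is illustrative.
const resourceOperationType byte = 0x40

// envelope mirrors the hypothetical msgpack envelope from the step 5 sketch.
type envelope struct {
	Data []byte
}

// storageBackend is the slice of the backend the FSM needs here.
type storageBackend interface {
	Apply(protobufOp []byte) error
}

// applyLogEntry routes a committed Raft log entry: resource operations are
// unwrapped and handed to the storage backend; everything else belongs to the
// old state store's FSM code.
func applyLogEntry(entry []byte, backend storageBackend) error {
	if len(entry) == 0 {
		return fmt.Errorf("empty log entry")
	}
	switch entry[0] {
	case resourceOperationType:
		var env envelope
		dec := codec.NewDecoder(bytes.NewReader(entry[1:]), &codec.MsgpackHandle{})
		if err := dec.Decode(&env); err != nil {
			return err
		}
		return backend.Apply(env.Data)
	default:
		return fmt.Errorf("message type %d is handled by the legacy FSM", entry[0])
	}
}
```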
#### Steps 7, 8 & 9
At this point, the operation is complete. The forwarding service returns a
successful response, as does the follower's storage backend, and the user
gets a successful response too.
#### Steps 10 & 11
Asynchronously, the log is replicated to followers and applied to their storage
backends.

Two binary image files were added (not shown in the diff): the architecture overview diagram (122 KiB) and the Raft storage backend diagram (300 KiB) referenced in docs/resources/README.md.


@@ -44,6 +44,9 @@ import (
// intended to communicate over Consul's multiplexed server port (which handles
// TLS).
//
// For more information, see here:
// https://github.com/hashicorp/consul/tree/main/docs/resources#raft-storage-backend
//
// You must call Run before using the backend.
func NewBackend(h Handle, l hclog.Logger) (*Backend, error) {
	s, err := inmem.NewStore()


@@ -7,6 +7,8 @@
// protoc (unknown)
// source: pbresource/resource.proto
// For more information, see: https://github.com/hashicorp/consul/tree/main/docs/resources
package pbresource
import (


@@ -3,6 +3,7 @@
syntax = "proto3";
// For more information, see: https://github.com/hashicorp/consul/tree/main/docs/resources
package hashicorp.consul.resource;
import "annotations/ratelimit/ratelimit.proto";