Add engineering docs for controllers and v2 architecture (#19671)

* add controller docs

* add v2 service mesh docs
Iryna Shustava 2023-11-17 17:55:09 -07:00 committed by GitHub
parent ce66433311
commit d05f67cebd
9 changed files with 293 additions and 22 deletions


@@ -25,7 +25,7 @@ be found in the public [user documentation].
1. [Agent Configuration](./config)
1. [RPC](./rpc)
1. [Cluster Persistence](./persistence)
-1. [Resources and Controllers](./resources)
+1. [V2 Architecture](v2-architecture)
1. [Client Agent](./client-agent)
1. [Service Discovery](./service-discovery)
1. [Service Mesh (Connect)](./service-mesh)


@@ -3,7 +3,7 @@
> **Note**
> While the content of this document is still accurate, it doesn't cover the new
> generic resource-oriented storage layer introduced in Consul 1.16. Please see
-> [Resources](../resources) for more information.
+> [Resources](../v2-architecture/controller-architecture) for more information.
The cluser persistence subsystem runs entirely in Server Agents. It handles both read and
write requests from the [RPC] subsystem. See the [Consul Architecture Guide] for an


@@ -1,14 +1,14 @@
-# Resources
+# Overview
> **Note**
> Looking for guidance on adding new resources and controllers to Consul? Check
-> out the [developer guide](./guide.md).
+> out the [developer guide](guide.md).
Consul 1.16 introduced a set of [generic APIs] for managing resources, and a
[controller runtime] for building functionality on top of them.
-[generic APIs]: ../../proto-public/pbresource/resource.proto
+[generic APIs]: ../../../proto-public/pbresource/resource.proto
-[controller runtime]: ../../internal/controller
+[controller runtime]: ../../../internal/controller
Previously, adding features to Consul involved making changes at every layer of
the stack, including: HTTP handlers, RPC handlers, MemDB tables, Raft
@@ -25,7 +25,7 @@ of a shared platform and own their resource types and controllers.
## Architecture Overview
-![architecture diagram](./architecture-overview.png)
+![architecture diagram](architecture-overview.png)
<sup>[source](https://whimsical.com/state-store-v2-UKE6SaEPXNc4UrZBrZj4Kg)</sup>
@@ -33,20 +33,20 @@ Our resource-oriented architecture comprises the following components:
#### Resource Service
-[Resource Service](../../proto-public/pbresource/resource.proto) is a gRPC
+[Resource Service](../../../proto-public/pbresource/resource.proto) is a gRPC
service that contains the shared logic for creating, reading, updating,
deleting, and watching resources. It will be consumed by controllers, our
Kubernetes integration, the CLI, and mapped to an HTTP+JSON API.
#### Type Registry
-[Type Registry](../../internal/resource/registry.go) is where teams register
+[Type Registry](../../../internal/resource/registry.go) is where teams register
their resource types, along with hooks for performing structural validation,
authorization, etc.
#### Storage Backend
-[Storage Backend](../../internal/storage/storage.go) is an abstraction over
+[Storage Backend](../../../internal/storage/storage.go) is an abstraction over
low-level storage primitives. Today, there are two implementations (Raft and
an in-memory backend for tests) but in the future, we envisage external storage
systems such as the Kubernetes API or an RDBMS could be supported which would
@@ -54,12 +54,13 @@ reduce operational complexity for our customers.
#### Controllers
-[Controllers](../../internal/controller/api.go) implement Consul's business
+[Controllers](../../../internal/controller/api.go) implement Consul's business
logic using asynchronous control loops that respond to changes in resources.
+Please see [Controller docs](controllers.md) for more details about controllers
## Raft Storage Backend
-Our [Raft Storage Backend](../../internal/storage/raft/backend.go) integrates
+Our [Raft Storage Backend](../../../internal/storage/raft/backend.go) integrates
with the existing Raft machinery (e.g. FSM) used by the [old state store]. It
also transparently forwards writes and strongly consistent reads to the leader
over gRPC.
@@ -69,7 +70,7 @@ at how a write operation is handled.
[old state store]: ../persistence/
-![raft storage backend diagram](./raft-backend.png)
+![raft storage backend diagram](raft-backend.png)
<sup>[source](https://whimsical.com/state-store-v2-UKE6SaEPXNc4UrZBrZj4Kg)</sup>
@@ -84,24 +85,24 @@ The storage backend determines that the current server is a Raft follower, and
forwards the operation to the leader via a gRPC [forwarding service] listening
on the multiplexed RPC port ([`ports.server`]).
-[forwarding service]: ../../proto/private/pbstorage/raft.proto
+[forwarding service]: ../../../proto/private/pbstorage/raft.proto
[`ports.server`]: https://developer.hashicorp.com/consul/docs/agent/config/config-files#server_rpc_port
#### Step 5
The leader's storage backend serializes the operation to protobuf and applies it
to the Raft log. As we need to share the Raft log with the old state store, we go
-through the [`consul.raftHandle`](../../agent/consul/raft_handle.go) and
+through the [`consul.raftHandle`](../../../agent/consul/raft_handle.go) and
[`consul.Server`](../../agent/consul/server/server.go) which applies a msgpack
envelope and type byte prefix.
#### Step 6
Raft consensus happens! Once the log has been committed, it is applied to the
-[FSM](../../agent/consul/fsm/fsm.go) which calls the storage backend's `Apply`
+[FSM](../../../agent/consul/fsm/fsm.go) which calls the storage backend's `Apply`
method to apply the protobuf-encoded operation to the [`inmem.Store`].
-[`inmem.Store`]: ../../internal/storage/inmem/store.go
+[`inmem.Store`]: ../../../internal/storage/inmem/store.go
#### Steps 7, 8, 9



@@ -0,0 +1,223 @@
# Controllers
This page describes how to write controllers in Consul's new controller architecture.
-> **Note**: This information is valid as of Consul 1.17 but some portions may change in future releases.
## Controller Basics
A controller consists of several parts:
1. **The watched type** - This is the main type a controller is watching and reconciling.
2. **Additional watched types** - These are additional types a controller may care about in addition to the main watched type.
3. **Additional custom watches** - These are the watches for things that aren't resources in Consul.
4. **Reconciler** - This is the instance that's responsible for reconciling requests whenever there's an event for the main watched type or for any of the watched types.
A basic controller setup could look like this:
```go
func barController() controller.Controller {
	return controller.ForType(pbexample.BarType).
		WithReconciler(&barReconciler{})
}
```
`barReconciler` needs to implement the `Reconcile` method of the `Reconciler` interface.
It's important to note that the `Reconcile` method only gets a request containing the `ID` of the main
watched resource, so it's up to the reconciler implementation to fetch the resource and any other information needed
to perform the reconciliation. The most basic reconciler could look as follows:
```go
type barReconciler struct{}

func (b *barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {
	// Fetch the Bar resource identified by req.ID and reconcile it here.
	// ...
	return nil
}
```
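For the controller to actually run, it also needs to be registered with the server's controller manager at startup. Below is a minimal sketch assuming the package exposes a registration helper and uses the manager's `Register` method, following the convention described in the [developer guide](guide.md); the package and function names here are illustrative:

```go
package foo

import "github.com/hashicorp/consul/internal/controller"

// RegisterControllers wires this package's controllers into the controller
// manager; it is called from the server's controller registration code.
// Sketch only: the package layout and helper name are illustrative.
func RegisterControllers(mgr *controller.Manager) {
	mgr.Register(barController())
}
```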
## Watching Additional Resources
Most of the time, controllers will need to watch more resources in addition to the main watched type.
To set up an additional watch, the main thing we need to figure out is how to map the additional watched resource to the main
watched resource. The controller runtime allows us to implement a mapper function that takes the additional watched resource
as input and produces reconcile `Requests` for our main watched type.
To figure out how to map the two resources together, we need to think about the relationship between them.
Several common relationship types between resources are currently in use:
1. Name-alignment: this relationship means that resources are named the same and live in the same tenancy, but have different data. Examples: `Service` and `ServiceEndpoints`, `Workload` and `ProxyStateTemplate`.
2. Selector: this relationship happens when one resource selects another by name or name prefix. Examples: `Service` and `Workload`, `ProxyConfiguration` and `Workload`.
3. Owner: in this relationship, one resource is the owner of another resource. Examples: `Service` and `ServiceEndpoints`, `HealthStatus` and `Workload`.
4. Arbitrary reference: in this relationship, one resource may reference another by some sort of reference. This reference could be a single string in the resource data or a more composite reference containing name, tenancy, and type. Examples: `Workload` and `WorkloadIdentity`, `HTTPRoute` and `Service`.
Note that it's possible for two watched resources to have more than one relationship type simultaneously.
For example, the `FailoverPolicy` type is name-aligned with the service to which it applies, but it also contains
references to destination services, so a controller that reconciles `FailoverPolicy` and watches `Service`
needs to account for both relationship types 1 and 4 whenever it gets an event for a `Service`.
### Simple Mappers
Let's look at some simple mapping examples.
#### Name-aligned resources
If our resources only have a name-aligned relationship, we can map them with a built-in function:
```go
func barController() controller.Controller {
	return controller.ForType(pbexample.BarType).
		WithWatch(pbexample.FooType, controller.ReplaceType(pbexample.BarType)).
		WithReconciler(&barReconciler{})
}
```
Here, all we need to do is replace the type of the `Foo` resource whenever we get an event for it.
#### Owned resources
Let's say our `Foo` resource owns `Bar` resources, where any `Foo` resource can own multiple `Bar` resources.
In this case, whenever we see an event for `Foo`, all we need to do is get all `Bar` resources that `Foo` currently owns.
For this, we can write a small mapper function that lists the resources owned by `Foo` to set up our watch:
```go
func MapOwned(ctx context.Context, rt controller.Runtime, res *pbresource.Resource) ([]controller.Request, error) {
	resp, err := rt.Client.ListByOwner(ctx, &pbresource.ListByOwnerRequest{Owner: res.Id})
	if err != nil {
		return nil, err
	}
	var result []controller.Request
	for _, r := range resp.Resources {
		result = append(result, controller.Request{ID: r.Id})
	}
	return result, nil
}

func barController() controller.Controller {
	return controller.ForType(pbexample.BarType).
		WithWatch(pbexample.FooType, MapOwned).
		WithReconciler(&barReconciler{})
}
```
### Advanced Mappers and Caches
For selector or arbitrary reference relationships, the mapping that we choose may need to be more advanced.
#### Naive mapper implementation
Let's first consider what a naive mapping function could look like in this case. Let's say that the `Bar` resource
references a `Foo` resource by name in its data. To watch and map `Foo` resources, we need to be able to find all relevant `Bar` resources
whenever we get an event for a `Foo` resource.
```go
func MapFoo(ctx context.Context, rt controller.Runtime, res *pbresource.Resource) ([]controller.Request, error) {
	resp, err := rt.Client.List(ctx, &pbresource.ListRequest{Type: pbexample.BarType, Tenancy: res.Id.Tenancy})
	if err != nil {
		return nil, err
	}
	var result []controller.Request
	for _, r := range resp.Resources {
		decodedResource, err := resource.Decode[*pbexample.Bar](r)
		if err != nil {
			return nil, err
		}
		// Only add Bar resources that reference this Foo by name.
		if decodedResource.GetData().GetFooName() == res.Id.Name {
			result = append(result, controller.Request{ID: r.Id})
		}
	}
	return result, nil
}
```
This approach is fine for cases when the number of `Bar` resources in a cluster is relatively small. If it's not,
then we'd be doing a large `O(N)` search on each `Foo` event, which could be too expensive.
#### Caching mappers
For cases when `N` is too large, we'd want to use a caching layer to help us make lookups more efficient so that they
don't require an `O(N)` search of potentially all cluster resources.
Caching mappers need to be kept up to date by individual controllers, and because of their added complexity, it's important
to carefully consider whether these mappers are strictly necessary for any given controller implementation.
For reference-relationships, we recommend using the `bimapper` to track relationships, while for the workload selector relationships,
we recommend using the `workloadselectionmapper.Mapper` or the underlying `selectiontracker.WorkloadSelectionTracker`.
These two mapper types can be combined into more complex mappers, such as the ones used by the `routes-controller`
or the `sidecar-proxy-controller`.
In our example, because `Foo` and `Bar` are related by a name reference, we'll use a `bimapper`.
```go
func barController() controller.Controller {
	mapper := bimapper.New(pbexample.BarType, pbexample.FooType)

	return controller.ForType(pbexample.BarType).
		WithWatch(pbexample.FooType, mapper.MapLink).
		WithReconciler(&barReconciler{mapper: mapper})
}
```
Now we need to make sure that we populate and clear the mapper as necessary. Generally, this should happen when the data
is fetched in the `Reconcile` method.
```go
func (b *barReconciler) Reconcile(ctx context.Context, rt controller.Runtime, req controller.Request) error {
	// Fetch the `Bar` resource we're reconciling.
	barResource, err := resource.GetDecodedResource[*pbexample.Bar](ctx, rt.Client, req.ID)
	if err != nil {
		return err
	}
	// If the resource is not found, make sure to untrack it from the mapper.
	if barResource == nil {
		b.mapper.UntrackItem(req.ID)
		return nil
	}

	// Fetch our referenced `Foo` resource.
	fooID := &pbresource.ID{
		Type:    pbexample.FooType,
		Name:    barResource.GetData().GetFooName(),
		Tenancy: req.ID.Tenancy,
	}
	res, err := resource.GetDecodedResource[*pbexample.Foo](ctx, rt.Client, fooID)
	if err != nil {
		return err
	}

	// Track the reference whether or not the Foo resource was found: if it's
	// currently missing, we should not untrack it, so that we still get mapped
	// to a reconcile request if it comes back.
	b.mapper.TrackItem(req.ID, []resource.ReferenceOrID{fooID})
	if res == nil {
		return nil
	}

	// ... reconcile Bar against the referenced Foo ...
	return nil
}
```
TODO: bound ref problem
### Custom Watches
In some cases, we may want to trigger reconciles for events that aren't generated by CRUD operations on resources, for example,
when an Envoy proxy connects to or disconnects from a server. The controller runtime allows us to set up watches for
events that come from a custom event channel. Please see the [xds-controller](https://github.com/hashicorp/consul/blob/ecfeb7aac51df8730064d869bb1f2c633a531522/internal/mesh/internal/controllers/xds/controller.go#L40-L41) for an example of custom watches.
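As a rough sketch only — the `WithCustomWatch` builder method, the `controller.Source` and `controller.Event` types, and the mapper signature below are assumptions modeled on the linked xds-controller rather than a verified API reference — a custom watch wired to an event channel might look like this:

```go
// Sketch: WithCustomWatch, controller.Source, and controller.Event are assumed
// from the linked xds-controller and may differ in your release.
func barController(proxyEvents chan controller.Event) controller.Controller {
	return controller.ForType(pbexample.BarType).
		WithCustomWatch(
			&controller.Source{Source: proxyEvents},
			// Map a custom event to the resources that need to be reconciled.
			func(ctx context.Context, rt controller.Runtime, event controller.Event) ([]controller.Request, error) {
				// Derive the affected resource IDs from the event payload here.
				return nil, nil
			},
		).
		WithReconciler(&barReconciler{})
}
```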
## Statuses
In many cases, controllers need to update statuses on resources to let the user know whether reconciliation of
a resource succeeded or failed.
These are the guidelines that we recommend for statuses:
* While status conditions are stored as a list, the condition `Type` should be treated as a key in a map, meaning a resource should not have two status conditions with the same type.
* Controllers need to write both successful and unsuccessful condition states. This is because we need to make sure that any previously failed status condition is cleared once the problem is resolved.
* Status conditions should be named such that the `True` state is a successful state and the `False` state is a failed state.
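For illustration, here is a minimal sketch of writing a status condition through the Resource Service's `WriteStatus` endpoint; the status key and condition names are made up for the example, and `res` is assumed to be the `*pbresource.Resource` being reconciled:

```go
// writeAcceptedStatus is a sketch of updating a status condition from a
// reconciler. The status key and the condition Type/Reason are illustrative;
// pick stable names for your controller.
func writeAcceptedStatus(ctx context.Context, rt controller.Runtime, res *pbresource.Resource) error {
	_, err := rt.Client.WriteStatus(ctx, &pbresource.WriteStatusRequest{
		Id:  res.Id,
		Key: "consul.io/bar-controller",
		Status: &pbresource.Status{
			// Record which generation of the resource this status describes.
			ObservedGeneration: res.Generation,
			Conditions: []*pbresource.Condition{{
				Type:    "Accepted",
				State:   pbresource.Condition_STATE_TRUE, // STATE_FALSE when reconciliation fails
				Reason:  "Accepted",
				Message: "bar resource accepted for processing",
			}},
		},
	})
	return err
}
```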
## Best Practices
Below is a list of controller best practices that we've learned so far. Many of them are inspired by [kubebuilder](https://book.kubebuilder.io/reference/good-practices).
* Avoid monolithic controllers as much as possible. A single controller should only manage a single resource to avoid complexity and race conditions.
* If using cached mappers, aim to write (update or delete entries) to mappers in the `Reconcile` method and read from them in the mapper functions used by watches.
* Fetch all data in the `Reconcile` method and avoid caching it from the mapper functions. This ensures that we get the latest data for each reconciliation.


@@ -6,7 +6,7 @@ Consul 🚂
## Resource Schema
Adding a new resource type begins with defining the object schema as a protobuf
-message, in the appropriate package under [`proto-public`](../../proto-public).
+message, in the appropriate package under [`proto-public`](../../../proto-public).
```shell
$ mkdir proto-public/pbfoo/v1alpha1
@@ -34,7 +34,7 @@ $ make proto
```
Next, we must add our resource type to the registry. At this point, it's useful
-to add a package (e.g. under [`internal`](../../internal)) to contain the logic
+to add a package (e.g. under [`internal`](../../../internal)) to contain the logic
associated with this resource type.
The convention is to have this package export variables for its type identifiers
@@ -65,7 +65,7 @@ and a partition.
Update the `NewTypeRegistry` method in [`type_registry.go`] to call your
package's type registration method:
-[`type_registry.go`]: ../../agent/consul/type_registry.go
+[`type_registry.go`]: ../../../agent/consul/type_registry.go
```Go
import (
@@ -90,7 +90,7 @@ $ consul agent -dev
```
You can now use [grpcurl](https://github.com/fullstorydev/grpcurl) to interact
-with the [resource service](../../proto-public/pbresource/resource.proto):
+with the [resource service](../../../proto-public/pbresource/resource.proto):
```shell
$ grpcurl -d @ \
@@ -283,7 +283,7 @@ Next, register your controller with the controller manager. Another common
pattern is to have your package expose a method for registering controllers,
which is called from `registerControllers` in [`server.go`].
-[`server.go`]: ../../agent/consul/server.go
+[`server.go`]: ../../../agent/consul/server.go
```Go
package foo



@@ -0,0 +1,47 @@
# V2 Service Mesh Architecture
In the Consul 1.16 and 1.17 releases, Consul's service mesh was rewritten to use the controller architecture and the
resource APIs.
At a high level, the service mesh consists of resources and controllers living in three groups: `catalog`, `mesh`,
and `auth`.
![controllers diagram](controllers.png)
The controllers in each group are responsible for producing an output that may then be used by other controllers.
-> **Note:** This diagram is valid as of Consul 1.17. It may change in future releases.
## Catalog Controllers
Catalog controllers are responsible for reconciling resources in the `catalog` API group.
1. **FailoverPolicy** controller validates that a `FailoverPolicy` resource has valid service references and updates the
   status of the `FailoverPolicy` resource with the result.
2. **WorkloadHealth** controller takes in workloads and any relevant health statuses and updates the status of a
workload with the combined health status.
3. **NodeHealth** controller takes in nodes and any relevant health statuses and updates the status of a node with the
combined health status.
4. **ServiceEndpoints** controller generates a `ServiceEndpoints` object that is name-aligned with a service and
contains workload addresses and ports that constitute a service.
## Mesh Controllers
1. **ProxyConfiguration** controller generates a `ComputedProxyConfiguration` resource that is name-aligned with a
workload. `ComputedProxyConfiguration` contains all merged `ProxyConfiguration` resources that apply to a specific
workload.
2. **Routes** controller generates a `ComputedRoutes` resource that is name-aligned with a service. It contains merged
configuration from all xRoutes objects as well as `FailoverPolicy` and `DestinationPolicy`.
3. **ExplicitDestinations** controller generates a `ComputedExplicitDestinations` resource that is name-aligned with a
workload. It contains merged `Destinations` resources that apply to a specific workload.
4. **SidecarProxy** controller takes in the results of the previous three controllers as well as some user-provided
   resources and generates a `ProxyStateTemplate` resource, which serves as the representation of Envoy configuration for
   sidecar proxies.
5. **XDSController** takes in `ProxyStateTemplate` resources, fills in missing endpoint references as well as
   certificates and CA roots, and sends the result to the component that pushes this information to Envoy.
## Auth Controllers
1. **TrafficPermissions** controller generates a `ComputedTrafficPermissions` resource that is name-aligned
   with a `WorkloadIdentity`. This computed resource contains all traffic permissions that apply to a specific workload
   identity.
