// consul/agent/structs/discovery_chain.go

package structs

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/hashicorp/consul/lib"
)

// CompiledDiscoveryChain is the result from taking a set of related config
// entries for a single service's discovery chain and restructuring them into a
// form that is more usable for actual service discovery.
type CompiledDiscoveryChain struct {
	ServiceName string
	Namespace   string // the namespace that the chain was compiled within
	Datacenter  string // the datacenter that the chain was compiled within

	// CustomizationHash is a unique hash of any data that affects the
	// compilation of the discovery chain other than config entries or the
	// name/namespace/datacenter evaluation criteria.
	//
	// If set, this value should be used to prefix/suffix any generated load
	// balancer data plane objects to avoid sharing customized and
	// non-customized versions.
	CustomizationHash string `json:",omitempty"`

	// Protocol is the overall protocol shared by everything in the chain.
	Protocol string `json:",omitempty"`

	// StartNode is the first key into the Nodes map that should be followed
	// when walking the discovery chain.
	StartNode string `json:",omitempty"`

	// Nodes contains all nodes available for traversal in the chain keyed by a
	// unique name. You can walk this by starting with StartNode; a sketch of
	// such a walk follows this type.
	//
	// NOTE: The names should be treated as opaque values and are only
	// guaranteed to be consistent within a single compilation.
	Nodes map[string]*DiscoveryGraphNode `json:",omitempty"`

	// Targets is a map of all targets used in this chain, keyed by target ID.
	Targets map[string]*DiscoveryTarget `json:",omitempty"`
}
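
// A minimal sketch of walking a compiled chain, starting from StartNode
// (hypothetical caller-side code, not part of this file):
//
//	node := chain.Nodes[chain.StartNode]
//	switch node.Type {
//	case DiscoveryGraphNodeTypeRouter:
//		// evaluate node.Routes, then continue at chain.Nodes[route.NextNode]
//	case DiscoveryGraphNodeTypeSplitter:
//		// pick one of node.Splits, then continue at chain.Nodes[split.NextNode]
//	case DiscoveryGraphNodeTypeResolver:
//		// terminal: resolve endpoints via chain.Targets[node.Resolver.Target]
//	}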
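
// WillFailoverThroughMeshGateway returns true if the given resolver node has
// at least one failover target whose mesh gateway mode is "local" or
// "remote". Failover through a mesh gateway must be detected and handled by
// Consul itself rather than deferred entirely to the data plane.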
func (c *CompiledDiscoveryChain) WillFailoverThroughMeshGateway(node *DiscoveryGraphNode) bool {
	if node.Type != DiscoveryGraphNodeTypeResolver {
		return false
	}

	failover := node.Resolver.Failover
	if failover != nil && len(failover.Targets) > 0 {
		for _, failTargetID := range failover.Targets {
			failTarget := c.Targets[failTargetID]
			switch failTarget.MeshGateway.Mode {
			case MeshGatewayModeLocal, MeshGatewayModeRemote:
				return true
			}
		}
	}
	return false
}

// IsDefault returns true if the compiled chain represents no routing, no
// splitting, and only the default resolution. We have to be careful here to
// avoid returning "yep this is default" when the only resolver action being
// applied is redirection to another resolver that is default, so we double
// check the resolver matches the requested resolver.
func (c *CompiledDiscoveryChain) IsDefault() bool {
	if c.StartNode == "" || len(c.Nodes) == 0 {
		return true
	}

	node := c.Nodes[c.StartNode]
	if node == nil {
		panic("not possible: missing node named '" + c.StartNode + "' in chain '" + c.ServiceName + "'")
	}

	if node.Type != DiscoveryGraphNodeTypeResolver {
		return false
	}
	if !node.Resolver.Default {
		return false
	}

	target := c.Targets[node.Resolver.Target]
	return target.Service == c.ServiceName && target.Namespace == c.Namespace
}

// ID returns an ID that encodes the service, namespace, and datacenter.
// This ID allows us to compare a discovery chain target to the chain upstream
// itself.
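// For example, a chain compiled for service "web" in namespace "default" and
// datacenter "dc1" has the ID "web.default.dc1" (see chainID below).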
func (c *CompiledDiscoveryChain) ID() string {
	return chainID("", c.ServiceName, c.Namespace, c.Datacenter)
}
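
// CompoundServiceName returns the chain's service name together with its
// enterprise metadata (the compiled namespace, in the default partition).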
func (c *CompiledDiscoveryChain) CompoundServiceName() ServiceName {
	entMeta := NewEnterpriseMetaInDefaultPartition(c.Namespace)
	return NewServiceName(c.ServiceName, &entMeta)
}

const (
	DiscoveryGraphNodeTypeRouter   = "router"
	DiscoveryGraphNodeTypeSplitter = "splitter"
	DiscoveryGraphNodeTypeResolver = "resolver"
)

// DiscoveryGraphNode is a single node in the compiled discovery chain.
type DiscoveryGraphNode struct {
	Type string
	Name string // this is NOT necessarily a service

	// fields for Type==router
	Routes []*DiscoveryRoute `json:",omitempty"`

	// fields for Type==splitter
	Splits []*DiscoverySplit `json:",omitempty"`

	// fields for Type==resolver
	Resolver *DiscoveryResolver `json:",omitempty"`

	// shared by Type==resolver || Type==splitter
	LoadBalancer *LoadBalancer `json:",omitempty"`
}

func (s *DiscoveryGraphNode) IsRouter() bool {
	return s.Type == DiscoveryGraphNodeTypeRouter
}

func (s *DiscoveryGraphNode) IsSplitter() bool {
	return s.Type == DiscoveryGraphNodeTypeSplitter
}

func (s *DiscoveryGraphNode) IsResolver() bool {
	return s.Type == DiscoveryGraphNodeTypeResolver
}
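
// MapKey returns a unique "<type>:<name>" key for this node.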
func (s *DiscoveryGraphNode) MapKey() string {
	return fmt.Sprintf("%s:%s", s.Type, s.Name)
}

// compiled form of ServiceResolverConfigEntry
type DiscoveryResolver struct {
	Default        bool               `json:",omitempty"`
	ConnectTimeout time.Duration      `json:",omitempty"`
	Target         string             `json:",omitempty"`
	Failover       *DiscoveryFailover `json:",omitempty"`
}
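
// MarshalJSON encodes ConnectTimeout as a duration string (such as "5s")
// instead of integer nanoseconds.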
func (r *DiscoveryResolver) MarshalJSON() ([]byte, error) {
	type Alias DiscoveryResolver
	exported := &struct {
		ConnectTimeout string `json:",omitempty"`
		*Alias
	}{
		ConnectTimeout: r.ConnectTimeout.String(),
		Alias:          (*Alias)(r),
	}
	if r.ConnectTimeout == 0 {
		exported.ConnectTimeout = ""
	}

	return json.Marshal(exported)
}

func (r *DiscoveryResolver) UnmarshalJSON(data []byte) error {
	type Alias DiscoveryResolver
	aux := &struct {
		ConnectTimeout string
		*Alias
	}{
		Alias: (*Alias)(r),
	}
	if err := lib.UnmarshalJSON(data, &aux); err != nil {
		return err
	}

	var err error
	if aux.ConnectTimeout != "" {
		if r.ConnectTimeout, err = time.ParseDuration(aux.ConnectTimeout); err != nil {
			return err
		}
	}
	return nil
}
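
// For illustration, a marshal round-trip with hypothetical values (not part
// of this file):
//
//	r := DiscoveryResolver{ConnectTimeout: 5 * time.Second, Target: "web.default.dc1"}
//	data, _ := json.Marshal(&r)
//	// string(data) == `{"ConnectTimeout":"5s","Target":"web.default.dc1"}`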

// compiled form of ServiceRoute
type DiscoveryRoute struct {
	Definition *ServiceRoute `json:",omitempty"`
	NextNode   string        `json:",omitempty"`
}

// compiled form of ServiceSplit
type DiscoverySplit struct {
	Weight   float32 `json:",omitempty"`
	NextNode string  `json:",omitempty"`
}

// compiled form of ServiceResolverFailover
type DiscoveryFailover struct {
	Targets []string `json:",omitempty"`
}

// DiscoveryTarget represents all of the inputs necessary to use a resolver
// config entry to execute a catalog query to generate a list of service
// instances during discovery.
type DiscoveryTarget struct {
	// ID is a unique identifier for referring to this target in a compiled
	// chain. It should be treated as a per-compile opaque string.
	ID string `json:",omitempty"`

	Service       string `json:",omitempty"`
	ServiceSubset string `json:",omitempty"`
	Namespace     string `json:",omitempty"`
	Partition     string `json:",omitempty"`
	Datacenter    string `json:",omitempty"`

	MeshGateway MeshGatewayConfig     `json:",omitempty"`
	Subset      ServiceResolverSubset `json:",omitempty"`

	// External is true if this target is outside of this consul cluster.
	External bool `json:",omitempty"`

	// SNI is the sni field to use when connecting to this set of endpoints
	// over TLS.
	SNI string `json:",omitempty"`

	// Name is the unique name for this target for use when generating load
	// balancer objects. This has a structure similar to SNI, but will not be
	// affected by SNI customizations.
	Name string `json:",omitempty"`
}
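
// NewDiscoveryTarget builds a DiscoveryTarget from its component fields and
// derives its ID via setID. For example (hypothetical values):
//
//	t := NewDiscoveryTarget("web", "v1", "default", "dc1")
//	// t.ID == "v1.web.default.dc1"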
func NewDiscoveryTarget(service, serviceSubset, namespace, datacenter string) *DiscoveryTarget {
	t := &DiscoveryTarget{
		Service:       service,
		ServiceSubset: serviceSubset,
		Namespace:     namespace,
		Datacenter:    datacenter,
	}
	t.setID()
	return t
}
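
// chainID encodes target coordinates as a dotted string of the form
// "[<subset>.]<service>.<namespace>.<datacenter>", e.g. "v1.web.default.dc1".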
func chainID(subset, service, namespace, dc string) string {
	// NOTE: this format is similar to the SNI syntax for simplicity
	if subset == "" {
		return fmt.Sprintf("%s.%s.%s", service, namespace, dc)
	}
	return fmt.Sprintf("%s.%s.%s.%s", subset, service, namespace, dc)
}

func (t *DiscoveryTarget) setID() {
	t.ID = chainID(t.ServiceSubset, t.Service, t.Namespace, t.Datacenter)
}

func (t *DiscoveryTarget) String() string {
	return t.ID
}

func (t *DiscoveryTarget) ServiceID() ServiceID {
	return NewServiceID(t.Service, t.GetEnterpriseMetadata())
}