* docs: Link to compatibility matrix for imageEnvoy
Added a link to the supported Envoy versions in the documentation for the `imageEnvoy` parameter.
* Update website/source/docs/platform/k8s/helm.html.md
* Enable filtering language support for the v1/connect/intentions listing API
* Update website for filtering of Intentions
* Update website/source/api/connect/intentions.html.md
* Change style to match "join" singular
- Replaced "(Consul) cluster" with "Consul Datacenter"
- Removed "ing" so the feature name matches "Consul Auto-join" and the tense is correct.
Co-authored-by: danielehc <40759828+danielehc@users.noreply.github.com>
* [docs] Built-in Proxies not meant for production
* Adding link to Envoy for Connect
* Update website/source/docs/connect/proxies/built-in.md
Co-Authored-By: Blake Covarrubias <blake@covarrubi.as>
* Revising note
* Update website/source/docs/connect/proxies/built-in.md
period
Co-Authored-By: Hans Hasselberg <me@hans.io>
Co-authored-by: Blake Covarrubias <blake@covarrubi.as>
Co-authored-by: Hans Hasselberg <me@hans.io>
* Add ACL CLI commands output format option.
Add a command-level formatter that encapsulates the command output printing
logic that depends on the command's `-format` option.
Move the Print* functions from acl_helpers into prettyFormatter, and add jsonFormatter. (A rough sketch of this formatter shape follows the commit list below.)
* Return error code in case of formatting failure.
* Add acl commands -format option to doc.
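The formatter described above can be pictured roughly like the following sketch. The interface, method set, and type names here are illustrative assumptions, not the actual Consul code; only `api.ACLToken` is a real type from the Go API client.

```go
package acl

import (
	"encoding/json"
	"fmt"

	"github.com/hashicorp/consul/api"
)

// Formatter is a hypothetical interface encapsulating the output logic
// selected by the -format option.
type Formatter interface {
	FormatToken(token *api.ACLToken) (string, error)
}

// prettyFormatter renders human-readable output (the old Print* helpers).
type prettyFormatter struct{}

func (f prettyFormatter) FormatToken(token *api.ACLToken) (string, error) {
	return fmt.Sprintf("AccessorID:   %s\nDescription:  %s\n",
		token.AccessorID, token.Description), nil
}

// jsonFormatter renders the same data as JSON.
type jsonFormatter struct{}

func (f jsonFormatter) FormatToken(token *api.ACLToken) (string, error) {
	b, err := json.MarshalIndent(token, "", "   ")
	if err != nil {
		return "", fmt.Errorf("failed to marshal token: %w", err)
	}
	return string(b), nil
}

// NewFormatter picks an implementation based on the -format flag value.
func NewFormatter(format string) (Formatter, error) {
	switch format {
	case "json":
		return jsonFormatter{}, nil
	case "pretty":
		return prettyFormatter{}, nil
	default:
		return nil, fmt.Errorf("unknown format: %q", format)
	}
}
```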
This is like a Möbius strip of code: low-level components (serf/memberlist) are connected to high-level components (the catalog and mesh gateways) in a twisty maze of references that makes it hard to dive into. With that in mind, here's a high-level summary of what you'll find in the patch:
There are several distinct chunks of code that are affected:
* new flags and config options for the server
* retry join WAN is slightly different
* retry join code is shared to discover primary mesh gateways from secondary datacenters
* because retry join logic runs in the *agent* and the results of that
operation for primary mesh gateways are needed in the *server*, there are
some methods like `RefreshPrimaryGatewayFallbackAddresses` that must exist
at multiple layers of abstraction just to pass the data down to the right
layer.
* new cache type `FederationStateListMeshGatewaysName` for use in `proxycfg/xds` layers
* the function signature for RPC dialing picked up a new required field (the
node name of the destination)
* several new RPCs for manipulating a FederationState object:
`FederationState:{Apply,Get,List,ListMeshGateways}`
* 3 read-only internal APIs, intended for debugging, that allow those RPCs to be invoked from curl
* raft and fsm changes to persist these FederationStates
* replication for FederationStates as they are canonically stored in the
Primary and replicated to the Secondaries.
* a special derivative of anti-entropy that runs in secondaries to snapshot
their local mesh gateway `CheckServiceNodes` and sync them into their upstream
FederationState in the primary (this works in conjunction with the
replication to distribute addresses for all mesh gateways in all DCs to all
other DCs)
* a "gateway locator" convenience object to make use of this data to choose
the addresses of gateways to use for any given RPC or gossip operation to a
remote DC. This gets data from the "retry join" logic in the agent and also
directly calls into the FSM.
* RPC (`:8300`) on the server sniffs the first byte of a new connection to
determine if it's actually doing native TLS. If so, it checks the ALPN header
for protocol determination (just as the existing system uses the
type-byte marker). A minimal sketch of this sniffing appears after this list.
* 2 new kinds of protocols are exclusively decoded via this native TLS
mechanism: one for ferrying "packet" operations (udp-like) from the gossip
layer and one for "stream" operations (tcp-like). The packet operations
re-use sockets (using length-prefixing) to cut down on TLS re-negotiation
overhead; see the framing sketch after this list.
* the server instances specially wrap the `memberlist.NetTransport` when running
with gateway federation enabled (in a `wanfed.Transport`). The general gist is
that if it tries to dial a node in the SAME datacenter (deduced by looking
at the suffix of the node name) there is no change. If dialing a DIFFERENT
datacenter, the traffic is wrapped up in a TLS+ALPN blob and sent through some mesh
gateways to eventually end up at a server's :8300 port (see the routing sketch after this list).
* a new flag when launching a mesh gateway via `consul connect envoy` to
indicate that the servers are to be exposed. This sets a special service
meta field when registering the gateway into the catalog (a registration sketch follows this list).
* the `proxycfg/xds` layers notice this metadata blob and activate additional watches for
the FederationState objects as well as the location of all of the Consul
servers in that datacenter.
* `xds:` if the extra metadata is in place, additional clusters are defined in a
DC to bulk sink all traffic to another DC's gateways. For the current
datacenter we listen on a wildcard name (`server.<dc>.consul`) that load
balances across all servers, as well as one mini-cluster per node
(`<node>.server.<dc>.consul`)
* the `consul tls cert create` command got a new flag (`-node`) to help create
an additional SAN in certs that can be used with this flavor of federation (see the certificate sketch after this list).
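To make the first-byte sniffing on `:8300` concrete, here is a minimal, self-contained sketch of peeking for a TLS handshake and then dispatching on the negotiated ALPN protocol. The function, type, and protocol names are invented for illustration; they are not the actual Consul identifiers.

```go
package wanfed

import (
	"bufio"
	"crypto/tls"
	"net"
)

// A TLS ClientHello record always starts with the handshake record type byte.
const tlsRecordTypeHandshake = 0x16

// sniffConn dispatches a freshly accepted RPC connection. If the first byte
// looks like a TLS handshake, terminate TLS and route based on the ALPN
// protocol negotiated during the handshake; otherwise fall back to the
// existing single-byte type marker.
func sniffConn(conn net.Conn, tlsConf *tls.Config, handle func(proto string, c net.Conn)) error {
	br := bufio.NewReader(conn)

	// Peek without consuming so the TLS handshake still sees the byte.
	first, err := br.Peek(1)
	if err != nil {
		return err
	}

	// Re-wrap the connection so the buffered byte is not lost.
	wrapped := &peekedConn{Conn: conn, r: br}

	if first[0] != tlsRecordTypeHandshake {
		// Legacy path: the first byte is the classic type marker.
		handle("legacy", wrapped)
		return nil
	}

	tlsConn := tls.Server(wrapped, tlsConf)
	if err := tlsConn.Handshake(); err != nil {
		return err
	}

	// ALPN result, e.g. hypothetical "consul/packet" or "consul/stream" names.
	proto := tlsConn.ConnectionState().NegotiatedProtocol
	handle(proto, tlsConn)
	return nil
}

// peekedConn reads from the buffered reader first, then the raw connection.
type peekedConn struct {
	net.Conn
	r *bufio.Reader
}

func (p *peekedConn) Read(b []byte) (int, error) { return p.r.Read(b) }
```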
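The "packet" protocol re-uses a long-lived stream by length-prefixing each gossip packet. A generic sketch of that kind of framing follows; the 2-byte big-endian prefix is an assumption for illustration, not necessarily the exact wire format Consul uses.

```go
package wanfed

import (
	"encoding/binary"
	"fmt"
	"io"
)

// maxPacketSize is an assumed bound; gossip packets are small (typically
// under the UDP MTU), so a 16-bit length prefix is plenty in this sketch.
const maxPacketSize = 65535

// writePacket frames one gossip packet onto a shared stream: a 2-byte
// big-endian length followed by the payload.
func writePacket(w io.Writer, pkt []byte) error {
	if len(pkt) > maxPacketSize {
		return fmt.Errorf("packet too large: %d bytes", len(pkt))
	}
	var hdr [2]byte
	binary.BigEndian.PutUint16(hdr[:], uint16(len(pkt)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(pkt)
	return err
}

// readPacket reads one length-prefixed packet from the shared stream.
func readPacket(r io.Reader) ([]byte, error) {
	var hdr [2]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	pkt := make([]byte, binary.BigEndian.Uint16(hdr[:]))
	_, err := io.ReadFull(r, pkt)
	return pkt, err
}
```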
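The same-vs-different datacenter decision in the wrapped transport comes down to parsing the node-name suffix. A simplified sketch, assuming WAN member names of the form `<node>.<dc>` as described above (everything else here is illustrative):

```go
package wanfed

import (
	"fmt"
	"strings"
)

// splitNodeName separates a WAN member name of the form "<node>.<dc>" into
// its node and datacenter parts.
func splitNodeName(fullName string) (node, dc string, err error) {
	idx := strings.LastIndex(fullName, ".")
	if idx < 0 {
		return "", "", fmt.Errorf("node name %q has no datacenter suffix", fullName)
	}
	return fullName[:idx], fullName[idx+1:], nil
}

// shouldUseGateways decides whether a dial from localDC to the named member
// stays on the normal transport (same datacenter) or must be wrapped in
// TLS+ALPN and routed through mesh gateways (different datacenter).
func shouldUseGateways(localDC, memberName string) (bool, error) {
	_, dc, err := splitNodeName(memberName)
	if err != nil {
		return false, err
	}
	return dc != localDC, nil
}
```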
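Registering a gateway with an extra piece of service metadata can be sketched with the official Go API client. The meta key below is a placeholder, not the key the new flag actually sets.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical registration of a mesh gateway that also exposes the
	// local Consul servers. The meta key is illustrative only.
	reg := &api.AgentServiceRegistration{
		Kind: api.ServiceKindMeshGateway,
		Name: "mesh-gateway",
		Port: 8443,
		Meta: map[string]string{
			"expose-servers": "true", // placeholder key, not the real one
		},
	}

	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
}
```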
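For the `-node` flag, the net effect is an extra DNS SAN of the form `<node>.server.<dc>.<domain>` on the certificate, matching the per-node cluster names above. A generic crypto/x509 sketch of adding such a SAN (self-signed here purely to stay self-contained; not the actual `consul tls cert create` implementation):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// The per-node SAN mirrors the xDS cluster naming described above.
	node, dc, domain := "node1", "dc2", "consul"
	nodeSAN := fmt.Sprintf("%s.server.%s.%s", node, dc, domain)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: fmt.Sprintf("server.%s.%s", dc, domain)},
		DNSNames: []string{
			fmt.Sprintf("server.%s.%s", dc, domain), // normal server SAN
			nodeSAN,                                 // extra SAN enabled by -node
			"localhost",
		},
		NotBefore: time.Now(),
		NotAfter:  time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:  x509.KeyUsageDigitalSignature,
	}

	// Self-signed to keep the sketch runnable; the real command signs with
	// the Consul CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("created cert with %d DNS SANs (%d bytes)\n", len(tmpl.DNSNames), len(der))
}
```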
The Session API in Consul 1.7.0 and 1.7.1 is incompatible with prior versions of Consul.
This PR adds a note to our version-specific upgrade guide to guard against users upgrading before the fix in 1.7.2 is released.
The go-discover library supports Linode. This adds support for
discovering other Consul agents running on Linode. Consul has supported
this since [66b8c20][1] was merged, so this commit just updates the
documentation to match current features.
[1]: 66b8c20990
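The same discovery can also be exercised directly through the go-discover library from Go. The Linode provider arguments shown here are illustrative; check the go-discover documentation for the exact parameter names.

```go
package main

import (
	"fmt"
	"log"
	"os"

	discover "github.com/hashicorp/go-discover"
)

func main() {
	d := discover.Discover{}
	l := log.New(os.Stderr, "", log.LstdFlags)

	// Illustrative configuration string; parameter names may differ,
	// see the go-discover README for the Linode provider.
	cfg := "provider=linode api_token=... region=us-east tag_name=consul-server"

	addrs, err := d.Addrs(cfg, l)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("discovered Consul agents:", addrs)
}
```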
* Remove trailing whitespace in DNS forwarding guide.
* Add example for enabling reverse lookup of IP addresses in the .consul domain on systemd-resolved platforms