anchor link fixes across a lot of pages

This commit is contained in:
Jeff Escalante 2020-04-08 20:09:01 -04:00
parent 711352bcf1
commit c23cda3389
GPG Key ID: 32D23C61AB5450DB
25 changed files with 852 additions and 1802 deletions

View File

@@ -10,10 +10,10 @@ description: >-
them. It is very similar to AWS IAM in many ways.
---
--> **1.3.0 and earlier:** This document only applies in Consul versions 1.3.0 and before. If you are using version 1.4.0 or later, please use the updated documentation [here](/docs/acl/acl-system.html)
# ACL System in Legacy Mode
+-> **1.3.0 and earlier:** This document only applies in Consul versions 1.3.0 and before. If you are using version 1.4.0 or later, please use the updated documentation [here](/docs/acl/acl-system.html)
~> **Alert: Deprecation Notice**
The ACL system described here was Consul's original ACL implementation. In Consul 1.4.0
the ACL system was rewritten and the legacy system was deprecated. The new ACL system information can be found [here](/docs/acl/acl-system.html).
@@ -327,7 +327,7 @@ Once the ACL system is bootstrapped, ACL tokens can be managed through the
After the servers are restarted above, you will see new errors in the logs of the Consul
servers related to permission denied errors:
-```
+```text
2017/07/08 23:38:24 [WARN] agent: Node info update blocked by ACLs
2017/07/08 23:38:44 [WARN] agent: Coordinate update blocked by ACLs
```
@@ -381,7 +381,7 @@ $ curl \
With that ACL agent token set, the servers will be able to sync themselves with the
catalog:
-```
+```text
2017/07/08 23:42:59 [INFO] agent: Synced node info
```
@@ -482,7 +482,7 @@ node-2 127.0.0.2:8301 alive client 0.9.0dev 2 dc1
The anonymous token is also used for DNS lookups since there's no way to pass a
token as part of a DNS request. Here's an example lookup for the "consul" service:
-```
+```shell
$ dig @127.0.0.1 -p 8600 consul.service.consul
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 consul.service.consul
@@ -524,7 +524,7 @@ $ curl \
With that new policy in place, the DNS lookup will succeed:
-```
+```shell
$ dig @127.0.0.1 -p 8600 consul.service.consul
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 consul.service.consul
@@ -1082,9 +1082,7 @@ name that starts with "admin".
## Advanced Topics
-<a name="replication"></a>
-#### Outages and ACL Replication
+#### Outages and ACL Replication ((#replication))
The Consul ACL system is designed with flexible rules to accommodate for an outage
of the [`acl_datacenter`](/docs/agent/options.html#acl_datacenter) or networking
@@ -1146,9 +1144,7 @@ using a process like this:
4. Rolling restart the agents in other datacenters and change their `acl_datacenter`
   configuration to the target datacenter.
-<a name="version_8_acls"></a>
-#### Complete ACL Coverage in Consul 0.8
+#### Complete ACL Coverage in Consul 0.8 ((#version_8_acls))
Consul 0.8 added many more ACL policy types and brought ACL enforcement to Consul
agents for the first time. To ease the transition to Consul 0.8 for existing ACL

View File

@@ -10,10 +10,10 @@ description: >-
them. It is very similar to AWS IAM in many ways.
---
--> **1.4.0 and later:** This document only applies in Consul versions 1.4.0 and later. The documentation for the legacy ACL system is [here](/docs/acl/acl-legacy.html)
# ACL Rules
+-> **1.4.0 and later:** This document only applies in Consul versions 1.4.0 and later. The documentation for the legacy ACL system is [here](/docs/acl/acl-legacy.html)
Consul provides an optional Access Control List (ACL) system which can be used
to control access to data and APIs. To learn more about Consul's ACL, review the
[ACL system documentation](/docs/acl/acl-system.html)

View File

@@ -10,10 +10,10 @@ description: >-
them. It is very similar to AWS IAM in many ways.
---
--> **1.4.0 and later:** This guide only applies in Consul versions 1.4.0 and later. The documentation for the legacy ACL system is [here](/docs/acl/acl-legacy.html)
# ACL System
+-> **1.4.0 and later:** This guide only applies in Consul versions 1.4.0 and later. The documentation for the legacy ACL system is [here](/docs/acl/acl-legacy.html)
Consul provides an optional Access Control List (ACL) system which can be used to control access to data and APIs.
The ACL is [Capability-based](https://en.wikipedia.org/wiki/Capability-based_security), relying on tokens which
are associated with policies to determine which fine-grained rules can be applied. Consul's capability-based
@@ -101,7 +101,7 @@ applied as a policy with the following preconfigured [ACL
rules](/docs/acl/acl-system.html#acl-rules-and-scope):
```hcl
-// Allow the service and its sidecar proxy to register into the catalog.
+# Allow the service and its sidecar proxy to register into the catalog.
service "<Service Name>" {
  policy = "write"
}
@@ -109,7 +109,7 @@ service "<Service Name>-sidecar-proxy" {
  policy = "write"
}
-// Allow for any potential upstreams to be resolved.
+# Allow for any potential upstreams to be resolved.
service_prefix "" {
  policy = "read"
}

View File

@@ -9,10 +9,10 @@ description: >-
within the local datacenter.
---
--> **1.5.0+:** This guide only applies in Consul versions 1.5.0 and newer.
# ACL Auth Methods
+-> **1.5.0+:** This guide only applies in Consul versions 1.5.0 and newer.
An auth method is a component in Consul that performs authentication against a
trusted external party to authorize the creation of ACL tokens usable within
the local datacenter.

View File

@@ -48,7 +48,7 @@ There are several different kinds of checks:
  blog post](https://www.hashicorp.com/blog/protecting-consul-from-rce-risk-in-specific-configurations)
  for more details.
-- HTTP + Interval - These checks make an HTTP `GET` request to the specified URL,
+- `HTTP + Interval` - These checks make an HTTP `GET` request to the specified URL,
  waiting the specified `interval` amount of time between requests (e.g. 30 seconds).
  The status of the service depends on the HTTP response code: any `2xx` code is
  considered passing, a `429 Too Many Requests` is a warning, and anything else is
@@ -66,7 +66,7 @@ There are several different kinds of checks:
  Certificate verification can be turned off by setting the `tls_skip_verify`
  field to `true` in the check definition.
-- TCP + Interval - These checks make a TCP connection attempt to the specified
+- `TCP + Interval` - These checks make a TCP connection attempt to the specified
  IP/hostname and port, waiting `interval` amount of time between attempts
  (e.g. 30 seconds). If no hostname
  is specified, it defaults to "localhost". The status of the service depends on
@@ -81,7 +81,7 @@ There are several different kinds of checks:
  It is possible to configure a custom TCP check timeout value by specifying the
  `timeout` field in the check definition.
-- <a name="TTL"></a>Time to Live (TTL) - These checks retain their last known state
+- `Time to Live (TTL)` ((#ttl)) - These checks retain their last known state
  for a given TTL. The state of the check must be updated periodically over the HTTP
  interface. If an external system fails to update the status within a given TTL,
  the check is set to the failed state. This mechanism, conceptually similar to a
@@ -95,7 +95,7 @@ There are several different kinds of checks:
  status of the check across restarts. Persisted check status is valid through the
  end of the TTL from the time of the last check.
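As a concrete illustration of the TTL check described above, a minimal agent check definition might look like the following sketch (the `id`, `name`, and `ttl` values here are illustrative, not from the original doc):

```json
{
  "check": {
    "id": "web-app-ttl",
    "name": "Web application TTL check",
    "ttl": "30s"
  }
}
```

An external process would then keep this check passing by periodically issuing `PUT /v1/agent/check/pass/web-app-ttl` against the local agent before each 30-second window lapses.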
-- Docker + Interval - These checks depend on invoking an external application which
+- `Docker + Interval` - These checks depend on invoking an external application which
  is packaged within a Docker container. The application is triggered within the running
  container via the Docker Exec API. We expect that the Consul agent user has access
  to either the Docker HTTP API or the Unix socket. Consul uses `$DOCKER_HOST` to
@@ -108,7 +108,7 @@ There are several different kinds of checks:
  must be configured with [`enable_script_checks`](/docs/agent/options.html#_enable_script_checks)
  set to `true` in order to enable Docker health checks.
-- gRPC + Interval - These checks are intended for applications that support the standard
+- `gRPC + Interval` - These checks are intended for applications that support the standard
  [gRPC health checking protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
  The state of the check will be updated by probing the configured endpoint, waiting `interval`
  amount of time between probes (e.g. 30 seconds). By default, gRPC checks will be configured
@@ -120,7 +120,7 @@ There are several different kinds of checks:
  `tls_skip_verify` field to `true` in the check definition.
  To check on a specific service instead of the whole gRPC server, add the service identifier after the `gRPC` check's endpoint in the following format `/:service_identifier`.
-- <a name="alias"></a>Alias - These checks alias the health state of another registered
+- `Alias` - These checks alias the health state of another registered
  node or service. The state of the check will be updated asynchronously, but is
  nearly instant. For aliased services on the same agent, the local state is monitored
  and no additional network resources are consumed. For other services and nodes,

View File

@@ -8,10 +8,10 @@ description: >-
should satisfy Connect upstream discovery requests for a given service name.
---
--> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
# Service Resolver
+-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
The `service-resolver` config entry kind controls which service instances
should satisfy Connect upstream discovery requests for a given service name.

View File

@@ -8,10 +8,10 @@ description: >-
manipulation at networking layer 7 (e.g. HTTP).
---
--> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
# Service Router
+-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
The `service-router` config entry kind controls Connect traffic routing and
manipulation at networking layer 7 (e.g. HTTP).

View File

@@ -10,10 +10,10 @@ description: >-
rewrite or other type of codebase migration).
---
--> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
# Service Splitter
+-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
The `service-splitter` config entry kind controls how to split incoming Connect
requests across different subsets of a single service (like during staged
canary rollouts), or perhaps across different services (like during a v2

File diff suppressed because it is too large

View File

@@ -54,7 +54,7 @@ Anything written to stdout is logged.
Here is an example configuration, where `handler_type` is optionally set to
`script`:
-```javascript
+```json
{
  "type": "key",
  "key": "foo/bar/baz",
@@ -82,15 +82,15 @@ always sent as a JSON payload.
Here is an example configuration:
-```javascript
+```json
{
  "type": "key",
  "key": "foo/bar/baz",
  "handler_type": "http",
  "http_handler_config": {
-    "path":"https://localhost:8000/watch",
+    "path": "https://localhost:8000/watch",
    "method": "POST",
-    "header": {"x-foo":["bar", "baz"]},
+    "header": { "x-foo": ["bar", "baz"] },
    "timeout": "10s",
    "tls_skip_verify": false
  }
@@ -119,7 +119,7 @@ The following types are supported. Detailed documentation on each is below:
- [`checks`](#checks) - Watch the value of health checks
- [`event`](#event) - Watch for custom user events
-### <a name="key"></a>Type: key
+### Type: key ((#key))
The "key" watch type is used to watch a specific key in the KV store.
It requires that the "key" parameter be specified.
@@ -138,11 +138,13 @@ Here is an example configuration:
Or, using the watch command:
-$ consul watch -type=key -key=foo/bar/baz /usr/bin/my-key-handler.sh
+```shell
+$ consul watch -type=key -key=foo/bar/baz /usr/bin/my-key-handler.sh
+```
An example of the output of this command:
-```javascript
+```json
{
  "Key": "foo/bar/baz",
  "CreateIndex": 1793,
@@ -154,7 +156,7 @@ An example of the output of this command:
}
```
-### <a name="keyprefix"></a>Type: keyprefix
+### Type: keyprefix ((#keyprefix))
The "keyprefix" watch type is used to watch a prefix of keys in the KV store.
It requires that the "prefix" parameter be specified. This watch
@@ -165,7 +167,7 @@ This maps to the `/v1/kv/` API internally.
Here is an example configuration:
-```javascript
+```json
{
  "type": "keyprefix",
  "prefix": "foo/",
@@ -175,12 +177,14 @@ Here is an example configuration:
Or, using the watch command:
-$ consul watch -type=keyprefix -prefix=foo/ /usr/bin/my-prefix-handler.sh
+```shell
+$ consul watch -type=keyprefix -prefix=foo/ /usr/bin/my-prefix-handler.sh
+```
An example of the output of this command:
-```javascript
-;[
+```text
+[
  {
    Key: 'foo/bar',
    CreateIndex: 1796,
@@ -211,7 +215,7 @@ An example of the output of this command:
]
```
-### <a name="services"></a>Type: services
+### Type: services ((#services))
The "services" watch type is used to watch the list of available
services. It has no parameters.
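Because the `services` watch takes no parameters, its configuration is just the type plus a handler. A minimal sketch for comparison with the other watch types (the handler path is illustrative):

```json
{
  "type": "services",
  "handler_type": "script",
  "args": ["/usr/bin/my-services-handler.sh"]
}
```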
@@ -220,7 +224,7 @@ This maps to the `/v1/catalog/services` API internally.
An example of the output of this command:
-```javascript
+```json
{
  "consul": [],
  "redis": [],
@@ -228,7 +232,7 @@ An example of the output of this command:
}
```
-### <a name="nodes"></a>Type: nodes
+### Type: nodes ((#nodes))
The "nodes" watch type is used to watch the list of available
nodes. It has no parameters.
@@ -237,8 +241,8 @@ This maps to the `/v1/catalog/nodes` API internally.
An example of the output of this command:
-```javascript
-;[
+```text
+[
  {
    Node: 'nyc1-consul-1',
    Address: '192.241.159.115',
@@ -262,11 +266,11 @@ An example of the output of this command:
  {
    Node: 'nyc1-worker-3',
    Address: '162.243.162.229',
-  },
+  }
]
```
-### <a name="service"></a>Type: service
+### Type: service ((#service))
The "service" watch type is used to monitor the providers
of a single service. It requires the "service" parameter
@@ -280,7 +284,7 @@ This maps to the `/v1/health/service` API internally.
Here is an example configuration with a single tag:
-```javascript
+```json
{
  "type": "service",
  "service": "redis",
@@ -291,7 +295,7 @@ Here is an example configuration with a single tag:
Here is an example configuration with multiple tags:
-```javascript
+```json
{
  "type": "service",
  "service": "redis",
@@ -304,16 +308,20 @@ Or, using the watch command:
Single tag:
-$ consul watch -type=service -service=redis -tag=bar /usr/bin/my-service-handler.sh
+```shell
+$ consul watch -type=service -service=redis -tag=bar /usr/bin/my-service-handler.sh
+```
Multiple tags:
-$ consul watch -type=service -service=redis -tag=bar -tag=foo /usr/bin/my-service-handler.sh
+```shell
+$ consul watch -type=service -service=redis -tag=bar -tag=foo /usr/bin/my-service-handler.sh
+```
An example of the output of this command:
-```javascript
-;[
+```text
+[
  {
    Node: {
      Node: 'foobar',
@@ -347,11 +355,11 @@ An example of the output of this command:
        ServiceName: '',
      },
    ],
-  },
+  }
]
```
-### <a name="checks"></a>Type: checks
+### Type: checks ((#checks))
The "checks" watch type is used to monitor the checks of a given
service or those in a specific state. It optionally takes the "service"
@@ -385,11 +393,15 @@ Or, using the watch command:
State:
-$ consul watch -type=checks -state=passing /usr/bin/my-check-handler.sh -passing
+```shell
+$ consul watch -type=checks -state=passing /usr/bin/my-check-handler.sh -passing
+```
Service:
-$ consul watch -type=checks -service=redis /usr/bin/my-check-handler.sh -redis
+```shell
+$ consul watch -type=checks -service=redis /usr/bin/my-check-handler.sh -redis
+```
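The command-line invocations above have equivalent configuration-file forms; for instance, the state-based checks watch might be sketched as follows (the handler path and its `-passing` argument are illustrative):

```json
{
  "type": "checks",
  "state": "passing",
  "handler_type": "script",
  "args": ["/usr/bin/my-check-handler.sh", "-passing"]
}
```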
An example of the output of this command:
@@ -408,7 +420,7 @@ An example of the output of this command:
]
```
-### <a name="event"></a>Type: event
+### Type: event ((#event))
The "event" watch type is used to monitor for custom user
events. These are fired using the [consul event](/docs/commands/event.html) command.
@@ -419,7 +431,7 @@ This maps to the `/v1/event/list` API internally.
Here is an example configuration:
-```javascript
+```json
{
  "type": "event",
  "name": "web-deploy",
@@ -429,11 +441,13 @@ Here is an example configuration:
Or, using the watch command:
-$ consul watch -type=event -name=web-deploy /usr/bin/my-event-handler.sh -web-deploy
+```shell
+$ consul watch -type=event -name=web-deploy /usr/bin/my-event-handler.sh -web-deploy
+```
An example of the output of this command:
-```javascript
+```json
[
  {
    "ID": "f07f3fcc-4b7d-3a7c-6d1e-cf414039fcee",
@@ -451,4 +465,6 @@ An example of the output of this command:
To fire a new `web-deploy` event, the following could be used:
-$ consul event -name=web-deploy 1609030
+```shell
+$ consul event -name=web-deploy 1609030
+```

View File

@@ -15,7 +15,7 @@ If you are getting an error message you don't see listed on this page, please co
### Multiple network interfaces
-```
+```text
Multiple private IPv4 addresses found. Please configure one with 'bind' and/or 'advertise'.
```
@@ -25,21 +25,21 @@ Your server has multiple active network interfaces. Consul needs to know which i
### Configuration syntax errors
-```
+```text
Error parsing config.hcl: At 1:12: illegal char
```
-```
+```text
Error parsing config.hcl: At 1:32: key 'foo' expected start of object ('{') or assignment ('=')
```
-```
+```text
Error parsing server.json: invalid character '`' looking for beginning of value
```
There is a syntax error in your configuration file. If the error message doesn't identify the exact location in the file where the problem is, try using [jq] to find it, for example:
-```
+```shell
$ consul agent -server -config-file server.json
==> Error parsing server.json: invalid character '`' looking for beginning of value
$ cat server.json | jq .
@@ -48,7 +48,7 @@ parse error: Invalid numeric literal at line 3, column 29
## Invalid host name
-```
+```text
Node name "consul_client.internal" will not be discoverable via DNS due to invalid characters.
```
@@ -56,11 +56,11 @@ Add the [`node name`][node_name] option to your agent configuration and provide
## I/O timeouts
-```
+```text
Failed to join 10.0.0.99: dial tcp 10.0.0.99:8301: i/o timeout
```
-```
+```text
Failed to sync remote state: No cluster leader
```
@@ -70,7 +70,7 @@ If they are not on the same LAN, check the [`retry_join`][retry_join] settings i
## Deadline exceeded
-```
+```text
Error getting server health from "XXX": context deadline exceeded
```
@@ -78,11 +78,11 @@ These error messages indicate a general performance problem on the Consul server
## Too many open files
-```
+```text
Error accepting TCP connection: accept tcp [::]:8301: too many open files in system
```
-```
+```text
Get http://localhost:8500/: dial tcp 127.0.0.1:31643: socket: too many open files
```
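When debugging these errors, it can help to first confirm what file-descriptor limit the shell that launches Consul actually has. A quick sketch (values vary per system; the `65536` figure is illustrative, not a Consul requirement):

```shell
# Print the current per-process open file limit for this shell session
ulimit -n

# To raise it for the session before starting the agent (values above the
# hard limit typically require root):
#   ulimit -n 65536
```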
@@ -100,7 +100,7 @@ This has been a [known issue](https://github.com/docker/libnetwork/issues/1204)
## ACL Not Found
-```
+```text
RPC error making call: rpc error making call: ACL not found
```
@@ -110,11 +110,11 @@ This indicates that you have ACL enabled in your cluster, but you aren't passing
### Incorrect certificate or certificate name
-```
+```text
Remote error: tls: bad certificate
```
-```
+```text
X509: certificate signed by unknown authority
```
@@ -124,11 +124,11 @@ If you generate your own certificates, make sure the server certificates include
### HTTP instead of HTTPS
-```
+```text
Error querying agent: malformed HTTP response
```
-```
+```text
Net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
```
@@ -140,7 +140,7 @@ If you are interacting with the API, change the URI scheme to "https".
## License warnings
-```
+```text
License: expiration time: YYYY-MM-DD HH:MM:SS -0500 EST, time left: 29m0s
```

View File

@@ -29,9 +29,7 @@ is "deny all", then all Connect connections are denied by default.
## Intention Basics
-Intentions can be managed via the
-[API](#),
-[CLI](#),
+Intentions can be managed via the [API](#), [CLI](#),
or UI. Please see the respective documentation for each for full details
on options, flags, etc.
Below is an example of a basic intention showing its main attributes
@@ -70,7 +68,7 @@ Arbitrary string key/value data may be associated with intentions. This
is unused by Consul but can be used by external systems or for visibility
in the UI.
-```
+```shell
$ consul intention create \
    -deny \
    -meta description='Hello there' \

View File

@@ -9,10 +9,10 @@ description: >-
you can plug in a gateway of your choice.
---
--> **1.6.0+:** This feature is available in Consul versions 1.6.0 and newer.
# Mesh Gateways
+-> **1.6.0+:** This feature is available in Consul versions 1.6.0 and newer.
Mesh gateways enable routing of Connect traffic between different Consul datacenters. Those datacenters
can reside in different clouds or runtime environments where general interconnectivity between all services
in all datacenters isn't feasible. These gateways operate by sniffing the SNI header out of the Connect session

View File

@@ -43,9 +43,7 @@ An overview of the sequence is shown below. The diagram and the following
details may seem complex, but this is a _regular mutual TLS connection_ with
an API call to verify the incoming client certificate.
-<div class="center">
-![Native Integration Overview](connect-native-overview.png)
-</div>
+![Native Integration Overview](/img/connect-native-overview.png)
Details on the steps are below:


@@ -42,8 +42,7 @@ compatible Envoy versions.

| 1.5.0, 1.5.1 | 1.9.1, 1.8.0† |
| 1.3.x, 1.4.x | 1.9.1, 1.8.0†, 1.7.0† |

~> **Note:** † Envoy versions lower than 1.9.1 are vulnerable to
[CVE-2019-9900](https://github.com/envoyproxy/envoy/issues/6434) and
[CVE-2019-9901](https://github.com/envoyproxy/envoy/issues/6435). Both are
related to HTTP request parsing and so only affect Consul Connect users if they

@@ -124,7 +123,6 @@ definition](/docs/connect/registration/service-registration.html) or
pod in a Kubernetes cluster to learn of a pod-specific IP address for StatsD
when the Envoy instance is bootstrapped while still allowing global
configuration of all proxies to use StatsD in the [global `proxy-defaults`
configuration entry](/docs/agent/config-entries/proxy-defaults.html). The env variable must contain a full valid URL
value as specified above and nothing else. It is not currently possible to use
environment variables as only part of the URL.
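As an illustrative sketch (the variable name `MY_STATSD_URL` is hypothetical), the config entry references the variable, which must resolve to the complete URL at bootstrap time:

```hcl
# Sketch of a proxy-defaults entry; the $MY_STATSD_URL reference is resolved
# when each Envoy instance is bootstrapped.
Kind = "proxy-defaults"
Name = "global"

Config {
  # MY_STATSD_URL must contain the entire URL, e.g. "udp://10.0.0.5:8125",
  # and nothing else -- partial interpolation is not supported.
  envoy_statsd_url = "$MY_STATSD_URL"
}
```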


@@ -198,10 +198,7 @@ Telegraf.

Here is an example Grafana dashboard:

[![Grafana Consul Cluster](/img/grafana-screenshot.png)](/img/grafana-screenshot.png)

## Metric Aggregates and Alerting from Telegraf


@@ -94,7 +94,6 @@ should display the output from the static service.

```bash
kubectl describe service ambassador
```

## Enabling end-to-end TLS

@@ -157,7 +156,9 @@ You should now be able to test the SSL connection from your browser.

When Ambassador is unable to establish an authenticated connection to the Connect proxy servers, browser connections will display this message:

```text
upstream connect error or disconnect/reset before headers
```

This error can have a number of different causes. Here are some things to check and troubleshooting steps you can take.
@@ -167,88 +168,112 @@ If you followed the above installation guide, Consul should have registered a se

To check whether Ambassador is allowed to connect, use the [`intention check`][intention-check] subcommand.

```shell
$ consul intention check ambassador http-echo
Allowed
```

### Confirm upstream proxy sidecar is running

First, find the name of the pod that contains your service.

```shell
$ kubectl get pods -l app=http-echo,role=server
NAME                         READY   STATUS    RESTARTS   AGE
http-echo-7fb79566d6-jmccp   2/2     Running   0          1d
```

Then describe the pod to make sure that the sidecar is present and running.

```shell
$ kubectl describe pod http-echo-7fb79566d6-jmccp
[...]
Containers:
  consul-connect-envoy-sidecar:
    [...]
    State:          Running
    Ready:          True
```

### Start up a downstream proxy and try connecting to it

Log into one of your Consul server pods (or any pod that has a Consul binary in it).

```shell
$ kubectl exec -ti consul-server-0 -- /bin/sh
```

Once inside the pod, try starting a test proxy. Use the name of your service in place of `http-echo`.

```shell
# consul connect proxy -service=ambassador -upstream http-echo:1234
==> Consul Connect proxy starting...
    Configuration mode: Flags
               Service: http-echo-client
              Upstream: http-echo => :1234
       Public listener: Disabled
```

If the proxy starts successfully, try connecting to it. Verify the output is as you expect.

```shell
# curl localhost:1234
"hello world"
```

Don't forget to kill the test proxy when you're done.

```shell
# kill %1
==> Consul Connect proxy shutdown
# exit
```

### Check Ambassador Connect sidecar logs

Find the name of the Connect Integration pod and make sure it is running.

```shell
$ kubectl get pods -l app=ambassador-pro,component=consul-connect
NAME                                                        READY   STATUS    RESTARTS   AGE
ambassador-pro-consul-connect-integration-f88fcb99f-hxk75   1/1     Running   0          1d
```

Dump the logs from the integration pod. If the service is running correctly, there won't be much in there.

```shell
$ kubectl logs ambassador-pro-consul-connect-integration-f88fcb99f-hxk75
time="2019-03-13T19:42:12Z" level=info msg="Starting Consul Connect Integration" consul_host=10.142.0.21 consul_port=8500 version=0.2.3
2019/03/13 19:42:12 Watching CA leaf for ambassador
time="2019-03-13T19:42:12Z" level=debug msg="Computed kubectl command and arguments" args="[kubectl apply -f -]"
time="2019-03-13T19:42:14Z" level=info msg="Updating TLS certificate secret" namespace= secret=ambassador-consul-connect
```

### Check Ambassador logs

Make sure the Ambassador pod itself is running.

```shell
$ kubectl get pods -l service=ambassador
NAME                          READY   STATUS    RESTARTS   AGE
ambassador-655875b5d9-vpc2v   2/2     Running   0          1d
```

Finally, check the logs for the main Ambassador pod.

```shell
$ kubectl logs ambassador-655875b5d9-vpc2v
```

### Check Ambassador admin interface

Forward the admin port from the Ambassador pod to your local machine.

```shell
$ kubectl port-forward pods/ambassador-655875b5d9-vpc2v 8877:8877
```

You should then be able to open http://localhost:8877/ambassador/v0/diag/ in your browser and view Ambassador's routing table. The table lists each URL mapping that has been set up. Service names will appear in green if Ambassador believes they are healthy, and red otherwise.


@@ -1,7 +1,7 @@
---
layout: docs
page_title: Connect Sidecar - Kubernetes
sidebar_title: 'Connect Sidecar'
sidebar_current: docs-platform-k8s-connect
description: >-
  Connect is a feature built into Consul that enables automatic
@@ -450,7 +450,7 @@ There are three options available:

   connectInject:
     enabled: true
     consulNamespaces:
       consulDestinationNamespace: 'my-consul-ns'
   ```

   -> **NOTE:** If the destination namespace does not exist we will create it.

@@ -461,7 +461,7 @@ There are three options available:

   This can be configured with:

   ```yaml
   global:
     enableConsulNamespaces: true

@@ -471,8 +471,6 @@ There are three options available:

       mirroringK8S: true
   ```

1. **Mirror Namespaces With Prefix** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes
   namespace **with a prefix**.
   For example, given a prefix `k8s-`, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `k8s-ns-1`.

@@ -487,7 +485,7 @@ There are three options available:

     enabled: true
     consulNamespaces:
       mirroringK8S: true
       mirroringK8SPrefix: 'k8s-'
   ```

### Consul Enterprise Namespace Upstreams


@@ -51,18 +51,16 @@ Kubernetes can choose to leverage Consul.

There are several ways to try Consul with Kubernetes in different environments.

**Guides**

- The [Getting Started with Consul Service Mesh track](https://learn.hashicorp.com/consul/gs-consul-service-mesh/understand-consul-service-mesh?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS)
  walks you through installing Consul as a service mesh for Kubernetes using the Helm
  chart, deploying services in the service mesh, and using intentions to secure service
  communications.

- The [Consul and minikube guide](https://learn.hashicorp.com/consul/getting-started-k8s/minikube?utm_source=consul.io&utm_medium=docs) is a quick walkthrough of how to deploy Consul with the official Helm chart on a local instance of Minikube.

- The [Deploying Consul with Kubernetes guide](https://learn.hashicorp.com/consul/getting-started-k8s/minikube?utm_source=consul.io&utm_medium=docs)
  walks you through deploying Consul on Kubernetes with the official Helm chart and can be applied to any Kubernetes installation type.

- Review production best practices and cloud-specific configurations for deploying Consul on managed Kubernetes runtimes.

@@ -76,7 +74,7 @@ Guides

- The [Consul and Kubernetes Deployment](https://learn.hashicorp.com/consul/day-1-operations/kubernetes-deployment-guide?utm_source=consul.io&utm_medium=docs) guide covers the necessary steps to install and configure a new Consul cluster on Kubernetes in production.

**Documentation**

- [Installing Consul](/docs/platform/k8s/run.html) covers how to install Consul using the Helm chart.
- [Helm Chart Reference](/docs/platform/k8s/helm.html) describes the different options for configuring the Helm chart.


@@ -10,7 +10,7 @@ description: Using predefined Persistent Volume Claims

The only way to use pre-created PVCs is to name them in the format Kubernetes expects:

```text
data-<kubernetes namespace>-<helm release name>-consul-server-<ordinal>
```

@@ -20,7 +20,7 @@ need as many PVCs as you have Consul servers. For example, given a Kubernetes
namespace of "vault," a release name of "consul," and 5 servers, you would need
to create PVCs with the following names:

```text
data-vault-consul-consul-server-0
data-vault-consul-consul-server-1
data-vault-consul-consul-server-2
data-vault-consul-consul-server-3
data-vault-consul-consul-server-4
```
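A pre-created claim for the first server might look like the following sketch (the access mode, storage size, and absence of a `storageClassName` are example values; match them to your environment):

```yaml
# Hypothetical PVC manifest: only the metadata.name pattern is prescribed,
# everything else must match your cluster's storage setup.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-vault-consul-consul-server-0
  namespace: vault
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```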


@@ -20,7 +20,6 @@ If you're **not using Consul Connect**, follow this process.

1. Run a Helm upgrade with the following config:

   ```yaml
   global:
     tls:

@@ -33,8 +32,8 @@ If you're **not using Consul Connect**, follow this process.

     updatePartition: <number_of_server_replicas>
   ```

   This upgrade will trigger a rolling update of the clients, as well as any
   other `consul-k8s` components, such as sync catalog or client snapshot deployments.

1. Perform a rolling upgrade of the servers, as described in
   [Upgrade Consul Servers](/docs/platform/k8s/upgrading.html#upgrading-consul-servers).

@@ -58,7 +57,6 @@ applications to it.

1. Create the following Helm config file for the upgrade:

   ```yaml
   global:
     tls:


@@ -322,35 +322,17 @@ There are three options available:

   syncCatalog:
     enabled: true
     consulNamespaces:
       consulDestinationNamespace: 'my-consul-ns'
   ```

1. **Mirror Namespaces** - Each Kubernetes service will be synced to a Consul namespace with the same
   name as its Kubernetes namespace.
   For example, service `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `ns-1`.
   If a mirrored namespace does not exist in Consul, it will be created.

   This can be configured with:

   ```yaml
   global:
     enableConsulNamespaces: true

@@ -359,7 +341,26 @@ There are three options available:

   syncCatalog:
     enabled: true
     consulNamespaces:
       mirroringK8S: true
       addK8SNamespaceSuffix: false
   ```

1. **Mirror Namespaces With Prefix** - Each Kubernetes service will be synced to a Consul namespace
   with the same name as its Kubernetes namespace **with a prefix**. For example, given a prefix
   `k8s-`, service `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace
   `k8s-ns-1`.

   This can be configured with:

   ```yaml
   global:
     enableConsulNamespaces: true
   syncCatalog:
     enabled: true
     consulNamespaces:
       mirroringK8S: true
       mirroringK8SPrefix: 'k8s-'
       addK8SNamespaceSuffix: false
   ```


@@ -14,10 +14,7 @@ The HashiCorp Consul Integration Program enables vendors to build integrations w

By leveraging Consul's RESTful HTTP API system, vendors are able to build extensible integrations at the data plane, platform, and infrastructure layers to extend Consul's functionality. These integrations can be performed with the OSS (open source) version of Consul. Integrations with advanced network segmentation, advanced federation, and advanced read scalability need to be tested against Consul Enterprise, since these features are only supported by Consul Enterprise.

[![Consul Architecture](/img/consul_ecosystem_diagram.png)](/img/consul_ecosystem_diagram.png)

**Data Plane**: These integrations automate IP updates of load balancers by leveraging Consul service discovery, automate firewall security policy updates by leveraging Consul intentions within a centralized management tool, extend sidecar proxies to support Consul Connect, and extend API gateways to allow Consul to route incoming traffic to the proxies for Connect-enabled services.