anchor link fixes across a lot of pages

Jeff Escalante 2020-04-08 20:09:01 -04:00
parent 711352bcf1
commit c23cda3389
25 changed files with 852 additions and 1802 deletions


@ -10,10 +10,10 @@ description: >-
them. It is very similar to AWS IAM in many ways.
---
-> **1.3.0 and earlier:** This document only applies in Consul versions 1.3.0 and before. If you are using version 1.4.0 or later please use the updated documentation [here](/docs/acl/acl-system.html)
# ACL System in Legacy Mode
-> **1.3.0 and earlier:** This document only applies in Consul versions 1.3.0 and before. If you are using version 1.4.0 or later please use the updated documentation [here](/docs/acl/acl-system.html)
~> **Alert: Deprecation Notice**
The ACL system described here was Consul's original ACL implementation. In Consul 1.4.0
the ACL system was rewritten and the legacy system was deprecated. The new ACL system information can be found [here](/docs/acl/acl-system.html).
@ -327,7 +327,7 @@ Once the ACL system is bootstrapped, ACL tokens can be managed through the
After the servers are restarted above, you will see new permission-denied errors in the
logs of the Consul servers:
```
```text
2017/07/08 23:38:24 [WARN] agent: Node info update blocked by ACLs
2017/07/08 23:38:44 [WARN] agent: Coordinate update blocked by ACLs
```
@ -381,7 +381,7 @@ $ curl \
With that ACL agent token set, the servers will be able to sync themselves with the
catalog:
```
```text
2017/07/08 23:42:59 [INFO] agent: Synced node info
```
@ -482,7 +482,7 @@ node-2 127.0.0.2:8301 alive client 0.9.0dev 2 dc1
The anonymous token is also used for DNS lookups since there's no way to pass a
token as part of a DNS request. Here's an example lookup for the "consul" service:
```
```shell
$ dig @127.0.0.1 -p 8600 consul.service.consul
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 consul.service.consul
@ -524,7 +524,7 @@ $ curl \
With that new policy in place, the DNS lookup will succeed:
```
```shell
$ dig @127.0.0.1 -p 8600 consul.service.consul
; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 consul.service.consul
@ -1082,9 +1082,7 @@ name that starts with "admin".
## Advanced Topics
<a name="replication"></a>
#### Outages and ACL Replication
#### Outages and ACL Replication ((#replication))
The Consul ACL system is designed with flexible rules to accommodate an outage
of the [`acl_datacenter`](/docs/agent/options.html#acl_datacenter) or networking
@ -1146,9 +1144,7 @@ using a process like this:
4. Rolling restart the agents in other datacenters and change their `acl_datacenter`
configuration to the target datacenter.
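If replication was enabled ahead of time, its health in the target datacenter can be confirmed via the ACL replication status endpoint before failing over. A sketch; the field values shown are illustrative:
```shell
$ curl http://localhost:8500/v1/acl/replication
{
  "Enabled": true,
  "Running": true,
  "SourceDatacenter": "dc1",
  "ReplicatedIndex": 1976,
  "LastSuccess": "2017-07-08T23:55:01Z",
  "LastError": "0001-01-01T00:00:00Z"
}
```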
<a name="version_8_acls"></a>
#### Complete ACL Coverage in Consul 0.8
#### Complete ACL Coverage in Consul 0.8 ((#version_8_acls))
Consul 0.8 added many more ACL policy types and brought ACL enforcement to Consul
agents for the first time. To ease the transition to Consul 0.8 for existing ACL


@ -10,10 +10,10 @@ description: >-
them. It is very similar to AWS IAM in many ways.
---
-> **1.4.0 and later:** This document only applies in Consul versions 1.4.0 and later. The documentation for the legacy ACL system is [here](/docs/acl/acl-legacy.html)
# ACL Rules
-> **1.4.0 and later:** This document only applies in Consul versions 1.4.0 and later. The documentation for the legacy ACL system is [here](/docs/acl/acl-legacy.html)
Consul provides an optional Access Control List (ACL) system which can be used
to control access to data and APIs. To learn more about Consul's ACLs, review the
[ACL system documentation](/docs/acl/acl-system.html)
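As a rough sketch of how rules are attached to tokens under the 1.4+ system (the file and policy names here are hypothetical):
```shell
# Load an HCL rules file into a policy, then attach the policy to a new token.
# "web-rules.hcl" and the names below are hypothetical.
$ consul acl policy create -name "web-service" -rules @web-rules.hcl
$ consul acl token create -description "Token for web" -policy-name "web-service"
```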


@ -10,10 +10,10 @@ description: >-
them. It is very similar to AWS IAM in many ways.
---
-> **1.4.0 and later:** This guide only applies in Consul versions 1.4.0 and later. The documentation for the legacy ACL system is [here](/docs/acl/acl-legacy.html)
# ACL System
-> **1.4.0 and later:** This guide only applies in Consul versions 1.4.0 and later. The documentation for the legacy ACL system is [here](/docs/acl/acl-legacy.html)
Consul provides an optional Access Control List (ACL) system which can be used to control access to data and APIs.
The ACL is [Capability-based](https://en.wikipedia.org/wiki/Capability-based_security), relying on tokens which
are associated with policies to determine which fine-grained rules can be applied. Consul's capability-based
@ -101,7 +101,7 @@ applied as a policy with the following preconfigured [ACL
rules](/docs/acl/acl-system.html#acl-rules-and-scope):
```hcl
// Allow the service and its sidecar proxy to register into the catalog.
# Allow the service and its sidecar proxy to register into the catalog.
service "<Service Name>" {
policy = "write"
}
@ -109,7 +109,7 @@ service "<Service Name>-sidecar-proxy" {
policy = "write"
}
// Allow for any potential upstreams to be resolved.
# Allow for any potential upstreams to be resolved.
service_prefix "" {
policy = "read"
}


@ -9,10 +9,10 @@ description: >-
within the local datacenter.
---
-> **1.5.0+:** This guide only applies in Consul versions 1.5.0 and newer.
# ACL Auth Methods
-> **1.5.0+:** This guide only applies in Consul versions 1.5.0 and newer.
An auth method is a component in Consul that performs authentication against a
trusted external party to authorize the creation of ACL tokens usable within
the local datacenter.
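As a minimal sketch of the flow (the method name, host, and credential paths are hypothetical), an auth method is created and a binding rule maps the identities it trusts onto Consul identities:
```shell
# Create a Kubernetes auth method, then bind authenticated service accounts
# to Consul service identities. Names, addresses, and paths are hypothetical.
$ consul acl auth-method create -type kubernetes -name minikube \
    -kubernetes-host "https://192.0.2.42:8443" \
    -kubernetes-ca-cert @/path/to/kube.ca.crt \
    -kubernetes-service-account-jwt "$(cat jwt.token)"
$ consul acl binding-rule create -method minikube \
    -bind-type service -bind-name '${serviceaccount.name}'
```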


@ -76,9 +76,9 @@ needs to have access to two Kubernetes APIs:
- [**TokenReview**](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#create-tokenreview-v1-authentication-k8s-io)
-> Kubernetes should be running with `--service-account-lookup`. This defaults
to true in Kubernetes 1.7, but any prior version should ensure the
Kubernetes API server is started with this setting.
-> Kubernetes should be running with `--service-account-lookup`. This defaults
to true in Kubernetes 1.7, but any prior version should ensure the
Kubernetes API server is started with this setting.
- [**ServiceAccount**](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#read-serviceaccount-v1-core)
(`get`)
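A sketch of granting that access with `kubectl`; the role and binding names are hypothetical, and an equivalent RBAC manifest works just as well:
```shell
# Allow creating TokenReviews and reading ServiceAccounts.
# Role and binding names below are hypothetical.
$ kubectl create clusterrole consul-auth-method \
    --verb=create --resource=tokenreviews.authentication.k8s.io
$ kubectl create clusterrole consul-auth-method-sa \
    --verb=get --resource=serviceaccounts
$ kubectl create clusterrolebinding consul-auth-method \
    --clusterrole=consul-auth-method --serviceaccount=default:consul-auth-method
# (bind consul-auth-method-sa to the same service account analogously)
```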


@ -48,7 +48,7 @@ There are several different kinds of checks:
blog post](https://www.hashicorp.com/blog/protecting-consul-from-rce-risk-in-specific-configurations)
for more details.
- HTTP + Interval - These checks make an HTTP `GET` request to the specified URL,
- `HTTP + Interval` - These checks make an HTTP `GET` request to the specified URL,
waiting the specified `interval` amount of time between requests (e.g. 30 seconds).
The status of the service depends on the HTTP response code: any `2xx` code is
considered passing, a `429 Too Many Requests` is a warning, and anything else is
@ -66,7 +66,7 @@ There are several different kinds of checks:
Certificate verification can be turned off by setting the `tls_skip_verify`
field to `true` in the check definition.
- TCP + Interval - These checks make a TCP connection attempt to the specified
- `TCP + Interval` - These checks make a TCP connection attempt to the specified
IP/hostname and port, waiting `interval` amount of time between attempts
(e.g. 30 seconds). If no hostname
is specified, it defaults to "localhost". The status of the service depends on
@ -81,7 +81,7 @@ There are several different kinds of checks:
It is possible to configure a custom TCP check timeout value by specifying the
`timeout` field in the check definition.
- <a name="TTL"></a>Time to Live (TTL) - These checks retain their last known state
- `Time to Live (TTL)` ((#ttl)) - These checks retain their last known state
for a given TTL. The state of the check must be updated periodically over the HTTP
interface. If an external system fails to update the status within a given TTL,
the check is set to the failed state. This mechanism, conceptually similar to a
@ -95,7 +95,7 @@ There are several different kinds of checks:
status of the check across restarts. Persisted check status is valid through the
end of the TTL from the time of the last check.
- Docker + Interval - These checks depend on invoking an external application which
- `Docker + Interval` - These checks depend on invoking an external application which
is packaged within a Docker Container. The application is triggered within the running
container via the Docker Exec API. We expect that the Consul agent user has access
to either the Docker HTTP API or the unix socket. Consul uses `$DOCKER_HOST` to
@ -108,7 +108,7 @@ There are several different kinds of checks:
must be configured with [`enable_script_checks`](/docs/agent/options.html#_enable_script_checks)
set to `true` in order to enable Docker health checks.
- gRPC + Interval - These checks are intended for applications that support the standard
- `gRPC + Interval` - These checks are intended for applications that support the standard
[gRPC health checking protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
The state of the check will be updated by probing the configured endpoint, waiting `interval`
amount of time between probes (e.g. 30 seconds). By default, gRPC checks will be configured
@ -120,7 +120,7 @@ There are several different kinds of checks:
`tls_skip_verify` field to `true` in the check definition.
To check on a specific service instead of the whole gRPC server, add the service identifier after the `gRPC` check's endpoint in the following format `/:service_identifier`.
- <a name="alias"></a>Alias - These checks alias the health state of another registered
- `Alias` - These checks alias the health state of another registered
node or service. The state of the check will be updated asynchronously, but is
nearly instant. For aliased services on the same agent, the local state is monitored
and no additional network resources are consumed. For other services and nodes,


@ -8,10 +8,10 @@ description: >-
should satisfy Connect upstream discovery requests for a given service name.
---
-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
# Service Resolver
-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
The `service-resolver` config entry kind controls which service instances
should satisfy Connect upstream discovery requests for a given service name.
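As a minimal sketch (the service, subset, and filter values are illustrative), a resolver can be written with `consul config write`:
```shell
# A hypothetical resolver that pins the "web" service to its "v1" subset.
$ cat > web-resolver.hcl <<EOF
kind           = "service-resolver"
name           = "web"
default_subset = "v1"
subsets = {
  "v1" = {
    filter = "Service.Meta.version == v1"
  }
}
EOF
$ consul config write web-resolver.hcl
```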


@ -8,10 +8,10 @@ description: >-
manipulation at networking layer 7 (e.g. HTTP).
---
-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
# Service Router
-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
The `service-router` config entry kind controls Connect traffic routing and
manipulation at networking layer 7 (e.g. HTTP).
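As a minimal sketch (the service and path values are illustrative), a route that sends `/admin` traffic to a different service looks like:
```shell
# Route requests under /admin on "web" to the "admin" service.
$ cat > web-router.hcl <<EOF
kind = "service-router"
name = "web"
routes = [
  {
    match {
      http {
        path_prefix = "/admin"
      }
    }
    destination {
      service = "admin"
    }
  }
]
EOF
$ consul config write web-router.hcl
```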
@ -158,9 +158,9 @@ routes = [
- `PathRegex` `(string: "")` - Regular expression to match on the HTTP
request path.
The syntax when using the Envoy proxy is [documented here](https://en.cppreference.com/w/cpp/regex/ecmascript).
The syntax when using the Envoy proxy is [documented here](https://en.cppreference.com/w/cpp/regex/ecmascript).
At most only one of `PathExact`, `PathPrefix`, or `PathRegex` may be configured.
At most only one of `PathExact`, `PathPrefix`, or `PathRegex` may be configured.
- `Header` `(array<ServiceRouteHTTPMatchHeader>)` - A set of criteria that can
match on HTTP request headers. If more than one is configured all must match
@ -243,8 +243,8 @@ routes = [
- `PrefixRewrite` `(string: "")` - Defines how to rewrite the HTTP request path
before proxying it to its final destination.
This requires that either `Match.HTTP.PathPrefix` or
`Match.HTTP.PathExact` be configured on this route.
This requires that either `Match.HTTP.PathPrefix` or
`Match.HTTP.PathExact` be configured on this route.
- `RequestTimeout` `(duration: 0s)` - The total amount of time permitted for
the entire downstream request (and retries) to be processed.


@ -10,10 +10,10 @@ description: >-
rewrite or other type of codebase migration).
---
-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
# Service Splitter
-> **1.6.0+:** This config entry is available in Consul versions 1.6.0 and newer.
The `service-splitter` config entry kind controls how to split incoming Connect
requests across different subsets of a single service (like during staged
canary rollouts), or perhaps across different services (like during a v2

File diff suppressed because it is too large


@ -54,7 +54,7 @@ Anything written to stdout is logged.
Here is an example configuration, where `handler_type` is optionally set to
`script`:
```javascript
```json
{
"type": "key",
"key": "foo/bar/baz",
@ -82,15 +82,15 @@ always sent as a JSON payload.
Here is an example configuration:
```javascript
```json
{
"type": "key",
"key": "foo/bar/baz",
"handler_type": "http",
"http_handler_config": {
"path":"https://localhost:8000/watch",
"path": "https://localhost:8000/watch",
"method": "POST",
"header": {"x-foo":["bar", "baz"]},
"header": { "x-foo": ["bar", "baz"] },
"timeout": "10s",
"tls_skip_verify": false
}
@ -119,7 +119,7 @@ The following types are supported. Detailed documentation on each is below:
- [`checks`](#checks) - Watch the value of health checks
- [`event`](#event) - Watch for custom user events
### <a name="key"></a>Type: key
### Type: key ((#key))
The "key" watch type is used to watch a specific key in the KV store.
It requires that the "key" parameter be specified.
@ -138,11 +138,13 @@ Here is an example configuration:
Or, using the watch command:
$ consul watch -type=key -key=foo/bar/baz /usr/bin/my-key-handler.sh
```shell
$ consul watch -type=key -key=foo/bar/baz /usr/bin/my-key-handler.sh
```
An example of the output of this command:
```javascript
```json
{
"Key": "foo/bar/baz",
"CreateIndex": 1793,
@ -154,7 +156,7 @@ An example of the output of this command:
}
```
### <a name="keyprefix"></a>Type: keyprefix
### Type: keyprefix ((#keyprefix))
The "keyprefix" watch type is used to watch a prefix of keys in the KV store.
It requires that the "prefix" parameter be specified. This watch
@ -165,7 +167,7 @@ This maps to the `/v1/kv/` API internally.
Here is an example configuration:
```javascript
```json
{
"type": "keyprefix",
"prefix": "foo/",
@ -175,12 +177,14 @@ Here is an example configuration:
Or, using the watch command:
$ consul watch -type=keyprefix -prefix=foo/ /usr/bin/my-prefix-handler.sh
```shell
$ consul watch -type=keyprefix -prefix=foo/ /usr/bin/my-prefix-handler.sh
```
An example of the output of this command:
```javascript
;[
```text
[
{
Key: 'foo/bar',
CreateIndex: 1796,
@ -211,7 +215,7 @@ An example of the output of this command:
]
```
### <a name="services"></a>Type: services
### Type: services ((#services))
The "services" watch type is used to watch the list of available
services. It has no parameters.
@ -220,7 +224,7 @@ This maps to the `/v1/catalog/services` API internally.
An example of the output of this command:
```javascript
```json
{
"consul": [],
"redis": [],
@ -228,7 +232,7 @@ An example of the output of this command:
}
```
### <a name="nodes"></a>Type: nodes
### Type: nodes ((#nodes))
The "nodes" watch type is used to watch the list of available
nodes. It has no parameters.
@ -237,8 +241,8 @@ This maps to the `/v1/catalog/nodes` API internally.
An example of the output of this command:
```javascript
;[
```text
[
{
Node: 'nyc1-consul-1',
Address: '192.241.159.115',
@ -262,11 +266,11 @@ An example of the output of this command:
{
Node: 'nyc1-worker-3',
Address: '162.243.162.229',
},
}
]
```
### <a name="service"></a>Type: service
### Type: service ((#service))
The "service" watch type is used to monitor the providers
of a single service. It requires the "service" parameter
@ -280,7 +284,7 @@ This maps to the `/v1/health/service` API internally.
Here is an example configuration with a single tag:
```javascript
```json
{
"type": "service",
"service": "redis",
@ -291,7 +295,7 @@ Here is an example configuration with a single tag:
Here is an example configuration with multiple tags:
```javascript
```json
{
"type": "service",
"service": "redis",
@ -304,16 +308,20 @@ Or, using the watch command:
Single tag:
$ consul watch -type=service -service=redis -tag=bar /usr/bin/my-service-handler.sh
```shell
$ consul watch -type=service -service=redis -tag=bar /usr/bin/my-service-handler.sh
```
Multiple tags:
$ consul watch -type=service -service=redis -tag=bar -tag=foo /usr/bin/my-service-handler.sh
```shell
$ consul watch -type=service -service=redis -tag=bar -tag=foo /usr/bin/my-service-handler.sh
```
An example of the output of this command:
```javascript
;[
```text
[
{
Node: {
Node: 'foobar',
@ -347,11 +355,11 @@ An example of the output of this command:
ServiceName: '',
},
],
},
}
]
```
### <a name="checks"></a>Type: checks
### Type: checks ((#checks))
The "checks" watch type is used to monitor the checks of a given
service or those in a specific state. It optionally takes the "service"
@ -385,11 +393,15 @@ Or, using the watch command:
State:
$ consul watch -type=checks -state=passing /usr/bin/my-check-handler.sh -passing
```shell
$ consul watch -type=checks -state=passing /usr/bin/my-check-handler.sh -passing
```
Service:
$ consul watch -type=checks -service=redis /usr/bin/my-check-handler.sh -redis
```shell
$ consul watch -type=checks -service=redis /usr/bin/my-check-handler.sh -redis
```
An example of the output of this command:
@ -408,7 +420,7 @@ An example of the output of this command:
]
```
### <a name="event"></a>Type: event
### Type: event ((#event))
The "event" watch type is used to monitor for custom user
events. These are fired using the [consul event](/docs/commands/event.html) command.
@ -419,7 +431,7 @@ This maps to the `/v1/event/list` API internally.
Here is an example configuration:
```javascript
```json
{
"type": "event",
"name": "web-deploy",
@ -429,11 +441,13 @@ Here is an example configuration:
Or, using the watch command:
$ consul watch -type=event -name=web-deploy /usr/bin/my-event-handler.sh -web-deploy
```shell
$ consul watch -type=event -name=web-deploy /usr/bin/my-event-handler.sh -web-deploy
```
An example of the output of this command:
```javascript
```json
[
{
"ID": "f07f3fcc-4b7d-3a7c-6d1e-cf414039fcee",
@ -451,4 +465,6 @@ An example of the output of this command:
To fire a new `web-deploy` event, the following could be used:
$ consul event -name=web-deploy 1609030
```shell
$ consul event -name=web-deploy 1609030
```
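Since every handler receives the watch payload as JSON on stdin, a minimal event handler can be sketched as follows (assumes `jq` is installed; the logging is illustrative):
```shell
#!/bin/sh
# Minimal sketch of a watch handler: the invocation payload arrives as
# JSON on stdin; here we just log each event name. Assumes jq is installed.
jq -r '.[].Name' | while read -r name; do
  echo "saw event: $name" >&2
done
```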


@ -15,7 +15,7 @@ If you are getting an error message you don't see listed on this page, please co
### Multiple network interfaces
```
```text
Multiple private IPv4 addresses found. Please configure one with 'bind' and/or 'advertise'.
```
@ -25,21 +25,21 @@ Your server has multiple active network interfaces. Consul needs to know which i
### Configuration syntax errors
```
```text
Error parsing config.hcl: At 1:12: illegal char
```
```
```text
Error parsing config.hcl: At 1:32: key 'foo' expected start of object ('{') or assignment ('=')
```
```
```text
Error parsing server.json: invalid character '`' looking for beginning of value
```
There is a syntax error in your configuration file. If the error message doesn't identify the exact location in the file where the problem is, try using [jq] to find it, for example:
```
```shell
$ consul agent -server -config-file server.json
==> Error parsing server.json: invalid character '`' looking for beginning of value
$ cat server.json | jq .
@ -48,7 +48,7 @@ parse error: Invalid numeric literal at line 3, column 29
## Invalid host name
```
```text
Node name "consul_client.internal" will not be discoverable via DNS due to invalid characters.
```
@ -56,11 +56,11 @@ Add the [`node name`][node_name] option to your agent configuration and provide
## I/O timeouts
```
```text
Failed to join 10.0.0.99: dial tcp 10.0.0.99:8301: i/o timeout
```
```
```text
Failed to sync remote state: No cluster leader
```
@ -70,7 +70,7 @@ If they are not on the same LAN, check the [`retry_join`][retry_join] settings i
## Deadline exceeded
```
```text
Error getting server health from "XXX": context deadline exceeded
```
@ -78,11 +78,11 @@ These error messages indicate a general performance problem on the Consul server
## Too many open files
```
```text
Error accepting TCP connection: accept tcp [::]:8301: too many open files in system
```
```
```text
Get http://localhost:8500/: dial tcp 127.0.0.1:31643: socket: too many open files
```
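The usual remediation is to raise the file descriptor limit for the Consul process. A sketch, assuming a Linux host (the limit value is illustrative):
```shell
# Inspect the limit for a running agent.
$ cat /proc/$(pgrep -x consul)/limits | grep 'open files'
# For a shell-launched agent, raise it before starting Consul:
$ ulimit -n 65536
# For a systemd-managed agent, set LimitNOFILE=65536 in the unit file, then:
$ sudo systemctl daemon-reload && sudo systemctl restart consul
```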
@ -100,7 +100,7 @@ This has been a [known issue](https://github.com/docker/libnetwork/issues/1204)
## ACL Not Found
```
```text
RPC error making call: rpc error making call: ACL not found
```
@ -110,11 +110,11 @@ This indicates that you have ACL enabled in your cluster, but you aren't passing
### Incorrect certificate or certificate name
```
```text
remote error: tls: bad certificate
```
```
```text
x509: certificate signed by unknown authority
```
@ -124,11 +124,11 @@ If you generate your own certificates, make sure the server certificates include
### HTTP instead of HTTPS
```
```text
Error querying agent: malformed HTTP response
```
```
```text
net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
```
@ -140,7 +140,7 @@ If you are interacting with the API, change the URI scheme to "https".
## License warnings
```
```text
License: expiration time: YYYY-MM-DD HH:MM:SS -0500 EST, time left: 29m0s
```


@ -29,9 +29,7 @@ is "deny all", then all Connect connections are denied by default.
## Intention Basics
Intentions can be managed via the
[API](#),
[CLI](#),
Intentions can be managed via the [API](#), [CLI](#),
or UI. Please see the respective documentation for each for full details
on options, flags, etc.
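As a quick sketch of the CLI workflow (the service names are illustrative):
```shell
# Deny connections from "web" to "db", then verify the result.
$ consul intention create -deny web db
Created: web => db (deny)
$ consul intention check web db
Denied
```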
Below is an example of a basic intention to show the basic attributes
@ -70,7 +68,7 @@ Arbitrary string key/value data may be associated with intentions. This
is unused by Consul but can be used by external systems or for visibility
in the UI.
```
```shell
$ consul intention create \
-deny \
-meta description='Hello there' \


@ -9,10 +9,10 @@ description: >-
you can plug in a gateway of your choice.
---
-> **1.6.0+:** This feature is available in Consul versions 1.6.0 and newer.
# Mesh Gateways
-> **1.6.0+:** This feature is available in Consul versions 1.6.0 and newer.
Mesh gateways enable routing of Connect traffic between different Consul datacenters. Those datacenters
can reside in different clouds or runtime environments where general interconnectivity between all services
in all datacenters isn't feasible. These gateways operate by sniffing the SNI header out of the Connect session


@ -43,9 +43,7 @@ An overview of the sequence is shown below. The diagram and the following
details may seem complex, but this is a _regular mutual TLS connection_ with
an API call to verify the incoming client certificate.
<div class="center">
![Native Integration Overview](connect-native-overview.png)
</div>
![Native Integration Overview](/img/connect-native-overview.png)
Details on the steps are below:


@ -42,8 +42,7 @@ compatible Envoy versions.
| 1.5.0, 1.5.1 | 1.9.1, 1.8.0† |
| 1.3.x, 1.4.x | 1.9.1, 1.8.0†, 1.7.0† |
~> Note:
† Envoy versions lower than 1.9.1 are vulnerable to
~> **Note:** † Envoy versions lower than 1.9.1 are vulnerable to
[CVE-2019-9900](https://github.com/envoyproxy/envoy/issues/6434) and
[CVE-2019-9901](https://github.com/envoyproxy/envoy/issues/6435). Both are
related to HTTP request parsing and so only affect Consul Connect users if they
@ -115,16 +114,15 @@ definition](/docs/connect/registration/service-registration.html) or
users can configure this property once in the [global `proxy-defaults`
configuration entry](/docs/agent/config-entries/proxy-defaults.html) for convenience. Currently, TCP is not supported.
~> **Note:** currently the URL **must use an IP address**, not a DNS name, due
to the way Envoy is set up for StatsD.
Users can also specify the whole parameter in the form `$ENV_VAR_NAME`, which
will cause the `consul connect envoy` command to resolve the actual URL from
the named environment variable when it runs. This, for example, allows each
pod in a Kubernetes cluster to learn of a pod-specific IP address for StatsD
when the Envoy instance is bootstrapped while still allowing global
configuration of all proxies to use StatsD in the [global `proxy-defaults`
~> **Note:** currently the URL **must use an IP address**, not a DNS name, due
to the way Envoy is set up for StatsD.
Users can also specify the whole parameter in the form `$ENV_VAR_NAME`, which
will cause the `consul connect envoy` command to resolve the actual URL from
the named environment variable when it runs. This, for example, allows each
pod in a Kubernetes cluster to learn of a pod-specific IP address for StatsD
when the Envoy instance is bootstrapped while still allowing global
configuration of all proxies to use StatsD in the [global `proxy-defaults`
configuration entry](/docs/agent/config-entries/proxy-defaults.html). The env variable must contain a full valid URL
value as specified above and nothing else. It is not currently possible to use
environment variables as only part of the URL.
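As a minimal sketch of the global configuration described above (the address is illustrative):
```shell
# Point every proxy at a StatsD sink via the global proxy-defaults entry.
$ cat > proxy-defaults.hcl <<EOF
kind = "proxy-defaults"
name = "global"
config {
  envoy_statsd_url = "udp://127.0.0.1:9125"
}
EOF
$ consul config write proxy-defaults.hcl
```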


@ -198,10 +198,7 @@ Telegraf.
Here is an example Grafana dashboard:
<div class="center">
[![Grafana Consul
Cluster](/img/grafana-screenshot.png)](/img/grafana-screenshot.png)
</div>
[![Grafana Consul Cluster](/img/grafana-screenshot.png)](/img/grafana-screenshot.png)
## Metric Aggregates and Alerting from Telegraf


@ -4,7 +4,7 @@ page_title: Ambassador Integration - Kubernetes
sidebar_title: 'Ambassador Integration'
sidebar_current: docs-platform-k8s-ambassador
description: |-
Ambassador is a Kubernetes-native API gateway and ingress controller that
Ambassador is a Kubernetes-native API gateway and ingress controller that
integrates well with Consul Connect.
---
@ -94,7 +94,6 @@ should display the output from the static service.
```bash
kubectl describe service ambassador
```
## Enabling end-to-end TLS
@ -157,7 +156,9 @@ You should now be able to test the SSL connection from your browser.
When Ambassador is unable to establish an authenticated connection to the Connect proxy servers, browser connections will display this message:
upstream connect error or disconnect/reset before headers
```text
upstream connect error or disconnect/reset before headers
```
This error can have a number of different causes. Here are some things to check and troubleshooting steps you can take.
@ -167,88 +168,112 @@ If you followed the above installation guide, Consul should have registered a se
To check whether Ambassador is allowed to connect, use the [`intention check`][intention-check] subcommand.
$ consul intention check ambassador http-echo
Allowed
```shell
$ consul intention check ambassador http-echo
Allowed
```
### Confirm upstream proxy sidecar is running
First, find the name of the pod that contains your service.
$ kubectl get pods -l app=http-echo,role=server
NAME READY STATUS RESTARTS AGE
http-echo-7fb79566d6-jmccp 2/2 Running 0 1d
```shell
$ kubectl get pods -l app=http-echo,role=server
NAME READY STATUS RESTARTS AGE
http-echo-7fb79566d6-jmccp 2/2 Running 0 1d
```
Then describe the pod to make sure that the sidecar is present and running.
$ kubectl describe pod http-echo-7fb79566d6-jmccp
[...]
Containers:
consul-connect-envoy-sidecar:
[...]
State: Running
Ready: True
```shell
$ kubectl describe pod http-echo-7fb79566d6-jmccp
[...]
Containers:
consul-connect-envoy-sidecar:
[...]
State: Running
Ready: True
```
### Start up a downstream proxy and try connecting to it
Log into one of your Consul server pods (or any pod that has a Consul binary in it).
$ kubectl exec -ti consul-server-0 -- /bin/sh
```shell
$ kubectl exec -ti consul-server-0 -- /bin/sh
```
Once inside the pod, try starting a test proxy. Use the name of your service in place of `http-echo`.
# consul connect proxy -service=ambassador -upstream http-echo:1234
==> Consul Connect proxy starting...
Configuration mode: Flags
Service: http-echo-client
Upstream: http-echo => :1234
Public listener: Disabled
```shell
# consul connect proxy -service=ambassador -upstream http-echo:1234
==> Consul Connect proxy starting...
Configuration mode: Flags
Service: http-echo-client
Upstream: http-echo => :1234
Public listener: Disabled
```
If the proxy starts successfully, try connecting to it. Verify the output is as you expect.
# curl localhost:1234
"hello world"
```shell
# curl localhost:1234
"hello world"
```
Don't forget to kill the test proxy when you're done.
# kill %1
==> Consul Connect proxy shutdown
```shell
# kill %1
==> Consul Connect proxy shutdown
# exit
# exit
```
### Check Ambassador Connect sidecar logs
Find the name of the Connect Integration pod and make sure it is running.
$ kubectl get pods -l app=ambassador-pro,component=consul-connect
NAME READY STATUS RESTARTS AGE
ambassador-pro-consul-connect-integration-f88fcb99f-hxk75 1/1 Running 0 1d
```shell
$ kubectl get pods -l app=ambassador-pro,component=consul-connect
NAME READY STATUS RESTARTS AGE
ambassador-pro-consul-connect-integration-f88fcb99f-hxk75 1/1 Running 0 1d
```
Dump the logs from the integration pod. If the service is running correctly, there won't be much in there.
$ kubectl logs ambassador-pro-consul-connect-integration-f88fcb99f-hxk75
```shell
$ kubectl logs ambassador-pro-consul-connect-integration-f88fcb99f-hxk75
time="2019-03-13T19:42:12Z" level=info msg="Starting Consul Connect Integration" consul_host=10.142.0.21 consul_port=8500 version=0.2.3
2019/03/13 19:42:12 Watching CA leaf for ambassador
time="2019-03-13T19:42:12Z" level=debug msg="Computed kubectl command and arguments" args="[kubectl apply -f -]"
time="2019-03-13T19:42:14Z" level=info msg="Updating TLS certificate secret" namespace= secret=ambassador-consul-connect
time="2019-03-13T19:42:12Z" level=info msg="Starting Consul Connect Integration" consul_host=10.142.0.21 consul_port=8500 version=0.2.3
2019/03/13 19:42:12 Watching CA leaf for ambassador
time="2019-03-13T19:42:12Z" level=debug msg="Computed kubectl command and arguments" args="[kubectl apply -f -]"
time="2019-03-13T19:42:14Z" level=info msg="Updating TLS certificate secret" namespace= secret=ambassador-consul-connect
```
### Check Ambassador logs
Make sure the Ambassador pod itself is running.
$ kubectl get pods -l service=ambassador
NAME READY STATUS RESTARTS AGE
ambassador-655875b5d9-vpc2v 2/2 Running 0 1d
```shell
$ kubectl get pods -l service=ambassador
NAME READY STATUS RESTARTS AGE
ambassador-655875b5d9-vpc2v 2/2 Running 0 1d
```
Finally, check the logs for the main Ambassador pod.
$ kubectl logs ambassador-655875b5d9-vpc2v
```shell
$ kubectl logs ambassador-655875b5d9-vpc2v
```
### Check Ambassador admin interface
Forward the admin port from the Ambassador pod to your local machine.
$ kubectl port-forward pods/ambassador-655875b5d9-vpc2v 8877:8877
```shell
$ kubectl port-forward pods/ambassador-655875b5d9-vpc2v 8877:8877
```
You should then be able to open http://localhost:8877/ambassador/v0/diag/ in your browser and view Ambassador's routing table. The table lists each URL mapping that has been set up. Service names will appear in green if Ambassador believes they are healthy, and red otherwise.


@ -1,7 +1,7 @@
---
layout: docs
page_title: Connect Sidecar - Kubernetes
sidebar_title: 'Kubernetes'
sidebar_title: 'Connect Sidecar'
sidebar_current: docs-platform-k8s-connect
description: >-
Connect is a feature built into Consul that enables automatic
@ -441,19 +441,19 @@ There are three options available:
1. **Single Destination Namespace** - Register all Kubernetes pods, regardless of namespace,
into the same Consul namespace.
This can be configured with:
This can be configured with:
```yaml
global:
enableConsulNamespaces: true
```yaml
global:
enableConsulNamespaces: true
connectInject:
enabled: true
consulNamespaces:
consulDestinationNamespace: "my-consul-ns"
```
connectInject:
enabled: true
consulNamespaces:
consulDestinationNamespace: 'my-consul-ns'
```
-> **NOTE:** If the destination namespace does not exist, we will create it.
-> **NOTE:** If the destination namespace does not exist, we will create it.
1. **Mirror Namespaces** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes namespace.
For example, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `ns-1`.
@ -461,34 +461,32 @@ There are three options available:
This can be configured with:
````yaml
```yaml
global:
enableConsulNamespaces: true
enableConsulNamespaces: true
connectInject:
enabled: true
consulNamespaces:
mirroringK8S: true
```
````
connectInject:
enabled: true
consulNamespaces:
mirroringK8S: true
```
1. **Mirror Namespaces With Prefix** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes
namespace **with a prefix**.
For example, given a prefix `k8s-`, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `k8s-ns-1`.
This can be configured with:
This can be configured with:
```yaml
global:
enableConsulNamespaces: true
```yaml
global:
enableConsulNamespaces: true
connectInject:
enabled: true
consulNamespaces:
mirroringK8S: true
mirroringK8SPrefix: "k8s-"
```
connectInject:
enabled: true
consulNamespaces:
mirroringK8S: true
mirroringK8SPrefix: 'k8s-'
```
### Consul Enterprise Namespace Upstreams

File diff suppressed because it is too large


@ -51,18 +51,16 @@ Kubernetes can choose to leverage Consul.
There are several ways to try Consul with Kubernetes in different environments.
Guides
**Guides**
- The [Getting Started with Consul Service Mesh track](https://learn.hashicorp.com/consul/gs-consul-service-mesh/understand-consul-service-mesh?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS)
walks you through installing Consul as a service mesh for Kubernetes using the Helm
chart, deploying services in the service mesh, and using intentions to secure service
communications.
- The [Consul and minikube guide](https://learn.hashicorp.com/consul/
getting-started-k8s/minikube?utm_source=consul.io&utm_medium=docs) is a quick walk through of how to deploy Consul with the official Helm chart on a local instance of Minikube.
- The [Consul and minikube guide](https://learn.hashicorp.com/consul/getting-started-k8s/minikube?utm_source=consul.io&utm_medium=docs) is a quick walkthrough of how to deploy Consul with the official Helm chart on a local instance of Minikube.
- The [Deploying Consul with Kubernetes guide](https://learn.hashicorp.com/
consul/getting-started-k8s/minikube?utm_source=consul.io&utm_medium=docs)
- The [Deploying Consul with Kubernetes guide](https://learn.hashicorp.com/consul/getting-started-k8s/minikube?utm_source=consul.io&utm_medium=docs)
walks you through deploying Consul on Kubernetes with the official Helm chart and can be applied to any Kubernetes installation type.
- Review production best practices and cloud-specific configurations for deploying Consul on managed Kubernetes runtimes.
@ -76,7 +74,7 @@ Guides
- The [Consul and Kubernetes Deployment](https://learn.hashicorp.com/consul/day-1-operations/kubernetes-deployment-guide?utm_source=consul.io&utm_medium=docs) guide covers the necessary steps to install and configure a new Consul cluster on Kubernetes in production.
Documentation
**Documentation**
- [Installing Consul](/docs/platform/k8s/run.html) covers how to install Consul using the Helm chart.
- [Helm Chart Reference](/docs/platform/k8s/helm.html) describes the different options for configuring the Helm chart.


@ -10,7 +10,7 @@ description: Using predefined Persistent Volume Claims
The only way to use pre-created PVCs is to name them in the format Kubernetes expects:
```
```text
data-<kubernetes namespace>-<helm release name>-consul-server-<ordinal>
```
@ -20,7 +20,7 @@ need as many PVCs as you have Consul servers. For example, given a Kubernetes
namespace of "vault," a release name of "consul," and 5 servers, you would need
to create PVCs with the following names:
```
```text
data-vault-consul-consul-server-0
data-vault-consul-consul-server-1
data-vault-consul-consul-server-2


@ -20,21 +20,20 @@ If you're **not using Consul Connect**, follow this process.
1. Run a Helm upgrade with the following config:
```yaml
global:
tls:
enabled: true
# This configuration sets `verify_outgoing`, `verify_server_hostname`,
# and `verify_incoming` to `false` on servers and clients,
# which allows TLS-disabled nodes to join the cluster.
verify: false
server:
updatePartition: <number_of_server_replicas>
```
```yaml
global:
tls:
enabled: true
# This configuration sets `verify_outgoing`, `verify_server_hostname`,
# and `verify_incoming` to `false` on servers and clients,
# which allows TLS-disabled nodes to join the cluster.
verify: false
server:
updatePartition: <number_of_server_replicas>
```
This upgrade will trigger a rolling update of the clients, as well as any
other `consul-k8s` components, such as sync catalog or client snapshot deployments.
This upgrade will trigger a rolling update of the clients, as well as any
other `consul-k8s` components, such as sync catalog or client snapshot deployments.
1. Perform a rolling upgrade of the servers, as described in
[Upgrade Consul Servers](/docs/platform/k8s/upgrading.html#upgrading-consul-servers).
@ -58,25 +57,24 @@ applications to it.
1. Create the following Helm config file for the upgrade:
```yaml
global:
tls:
enabled: true
# This configuration sets `verify_outgoing`, `verify_server_hostname`,
# and `verify_incoming` to `false` on servers and clients,
# which allows TLS-disabled nodes to join the cluster.
verify: false
server:
updatePartition: <number_of_server_replicas>
client:
updateStrategy: |
type: OnDelete
```
```yaml
global:
tls:
enabled: true
# This configuration sets `verify_outgoing`, `verify_server_hostname`,
# and `verify_incoming` to `false` on servers and clients,
# which allows TLS-disabled nodes to join the cluster.
verify: false
server:
updatePartition: <number_of_server_replicas>
client:
updateStrategy: |
type: OnDelete
```
In this configuration, we're setting `server.updatePartition` to the number of
server replicas as described in [Upgrade Consul Servers](/docs/platform/k8s/upgrading.html#upgrading-consul-servers)
and `client.updateStrategy` to `OnDelete` to manually trigger an upgrade of the clients.
In this configuration, we're setting `server.updatePartition` to the number of
server replicas as described in [Upgrade Consul Servers](/docs/platform/k8s/upgrading.html#upgrading-consul-servers)
and `client.updateStrategy` to `OnDelete` to manually trigger an upgrade of the clients.
1. Run `helm upgrade` with the above config file. The upgrade will trigger an update of all
components except clients and servers, such as the Consul Connect webhook deployment
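A minimal sketch of that command, assuming the release is named `consul`, the chart is `hashicorp/consul`, and the config above is saved as `config.yaml`:
```shell
$ helm upgrade consul hashicorp/consul -f config.yaml
```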


@ -313,56 +313,57 @@ There are three options available:
1. **Single Destination Namespace** - Sync all Kubernetes services, regardless of namespace,
into the same Consul namespace.
This can be configured with:
This can be configured with:
```yaml
global:
enableConsulNamespaces: true
```yaml
global:
enableConsulNamespaces: true
syncCatalog:
enabled: true
consulNamespaces:
consulDestinationNamespace: "my-consul-ns"
```
syncCatalog:
enabled: true
consulNamespaces:
consulDestinationNamespace: 'my-consul-ns'
```
1. **Mirror Namespaces** - Each Kubernetes service will be synced to a Consul namespace with the same
name as its Kubernetes namespace.
1. **Mirror Namespaces** - Each Kubernetes service will be synced to a Consul namespace with the same name as its Kubernetes namespace.
For example, service `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `ns-1`.
If a mirrored namespace does not exist in Consul, it will be created.
This can be configured with:
````yaml
```yaml
global:
enableConsulNamespaces: true
enableConsulNamespaces: true
syncCatalog:
enabled: true
consulNamespaces:
mirroringK8S: true
syncCatalog:
enabled: true
consulNamespaces:
mirroringK8S: true
addK8SNamespaceSuffix: false
```
addK8SNamespaceSuffix: false
```
````
1. **Mirror Namespaces With Prefix** - Each Kubernetes service will be synced to a Consul namespace
with the same name as its Kubernetes namespace **with a prefix**. For example, given a prefix
`k8s-`, service `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace
`k8s-ns-1`.
1. **Mirror Namespaces With Prefix** - Each Kubernetes service will be synced to a Consul namespace with the same name as its Kubernetes
namespace **with a prefix**.
For example, given a prefix `k8s-`, service `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `k8s-ns-1`.
This can be configured with:
This can be configured with:
```yaml
global:
enableConsulNamespaces: true
```yaml
global:
enableConsulNamespaces: true
syncCatalog:
enabled: true
consulNamespaces:
mirroringK8S: true
mirroringK8SPrefix: 'k8s-'
syncCatalog:
enabled: true
consulNamespaces:
mirroringK8S: true
mirroringK8SPrefix: "k8s-"
addK8SNamespaceSuffix: false
```
addK8SNamespaceSuffix: false
```
-> Note that in both mirroring examples we're setting `addK8SNamespaceSuffix: false`. If set to `true`
(the default), the Kubernetes namespace will be added as a suffix to each


@ -14,10 +14,7 @@ The HashiCorp Consul Integration Program enables vendors to build integrations w
By leveraging Consul's RESTful HTTP API system, vendors are able to build extensible integrations at the data plane, platform, and the infrastructure layer to extend Consul's functionality. These integrations can be performed with the OSS (open source) version of Consul. Integrations with advanced network segmentation, advanced federation, and advanced read scalability need to be tested against Consul Enterprise, since these features are only supported by Consul Enterprise.
<div class="center">
[![Consul
Architecture](/img/consul_ecosystem_diagram.png)](/img/consul_ecosystem_diagram.png)
</div>
[![Consul Architecture](/img/consul_ecosystem_diagram.png)](/img/consul_ecosystem_diagram.png)
**Data Plane**: These integrations automate IP updates of load balancers by leveraging Consul service discovery, automate firewall security policy updates by leveraging Consul intentions within a centralized management tool, extend sidecar proxies to support Consul Connect, and extend API gateways to allow Consul to route incoming traffic to the proxies for Connect-enabled services.