docs: Fix spelling errors across site (#12973)

Blake Covarrubias 2022-05-10 07:28:33 -07:00 committed by GitHub
parent f4e1ade46a
commit 8edee753d1
37 changed files with 79 additions and 79 deletions


@ -778,7 +778,7 @@ Valid time units are 'ns', 'us' (or 'µs'), 'ms', 's', 'm', 'h'."
is updated. See the [watch documentation](/docs/dynamic-app-config/watches) for more detail.
Watches can be modified when the configuration is reloaded.
## ACL Paramters
## ACL Parameters
- `acl` ((#acl)) - This object allows a number of sub-keys to be set which
controls the ACL system. Configuring the ACL system within the ACL stanza was added
@ -1618,7 +1618,7 @@ Valid time units are 'ns', 'us' (or 'µs'), 'ms', 's', 'm', 'h'."
- `serf_wan_allowed_cidrs` ((#serf_wan_allowed_cidrs)) Equivalent to the [`-serf-wan-allowed-cidrs` command-line flag](/docs/agent/config/cli-flags#_serf_wan_allowed_cidrs).
## Telemetry Paramters
## Telemetry Parameters
- `telemetry` This is a nested object that configures where
Consul sends its runtime telemetry, and contains the following keys:


@ -44,7 +44,7 @@ If Consul is instead deployed with 5 servers, the quorum size increases to 3, so
the fault tolerance increases to 2.
To learn more about the relationship between the
number of servers, quorum, and fault tolerance, refer to the
[concensus protocol documentation](/docs/architecture/consensus#deployment_table).
[consensus protocol documentation](/docs/architecture/consensus#deployment_table).
Effectively mitigating your risk is more nuanced than just increasing the fault tolerance
metric described above. You must consider:


@ -27,7 +27,7 @@ From a 10,000 foot altitude the architecture of Consul looks like this:
Let's break down this image and describe each piece. First of all, we can see
that there are two datacenters, labeled "one" and "two". Consul has first
class support for [multiple datacenters](https://learn.hashicorp.com/tutorials/consul/federarion-gossip-wan) and
class support for [multiple datacenters](https://learn.hashicorp.com/tutorials/consul/federation-gossip-wan) and
expects this to be the common case.
Within each datacenter, we have a mixture of clients and servers. It is expected


@ -148,7 +148,7 @@ The configuration options are listed below.
When WAN Federation is enabled, every secondary
datacenter must specify a unique `intermediate_pki_path`.
- `IntermediatePKINamespace` / `intermedial_pki_namespace` (`string: <optional>`) - The absolute namespace
- `IntermediatePKINamespace` / `intermediate_pki_namespace` (`string: <optional>`) - The absolute namespace
that the `IntermediatePKIPath` is in. Setting this overrides the `Namespace` option for the `IntermediatePKIPath`. Introduced in 1.12.1
- `CAFile` / `ca_file` (`string: ""`) - Specifies an optional path to the CA


@ -64,7 +64,7 @@ TransparentProxy {
DialedDirectly = <true if proxy instances should be dialed directly>
}
MeshGateway {
Mode = "<name of mesh gatweay configuration for all proxies>"
Mode = "<name of mesh gateway configuration for all proxies>"
}
Expose {
Checks = <true to expose all HTTP and gRPC checks through Envoy>
@ -98,7 +98,7 @@ spec:
outboundListenerPort: <port the proxy should listen on for outbound traffic>
dialedDirectly: <true if proxy instances should be dialed directly>
meshGateway:
mode: <name of mesh gatweay configuration for all proxies>
mode: <name of mesh gateway configuration for all proxies>
expose:
checks: <true to expose all HTTP and gRPC checks through Envoy>
paths:
@ -127,7 +127,7 @@ spec:
"DialedDirectly": <true if proxy instances should be dialed directly>
},
"MeshGateway": {
"Mode": = "<name of mesh gatweay configuration for all proxies>"
"Mode": = "<name of mesh gateway configuration for all proxies>"
},
"Expose": {
"Checks": <true to expose all HTTP and gRPC checks through Envoy>,
@ -171,7 +171,7 @@ TransparentProxy {
DialedDirectly = <true if proxy instances should be dialed directly>
}
MeshGateway {
Mode = "<name of mesh gatweay configuration for all proxies>"
Mode = "<name of mesh gateway configuration for all proxies>"
}
Expose {
Checks = <true to expose all HTTP and gRPC checks through Envoy>
@ -206,7 +206,7 @@ spec:
outboundListenerPort: <port the proxy should listen on for outbound traffic>
dialedDirectly: <true if proxy instances should be dialed directly>
meshGateway:
mode: <name of mesh gatweay configuration for all proxies>
mode: <name of mesh gateway configuration for all proxies>
expose:
checks: <true to expose all HTTP and gRPC checks through Envoy>
paths:
@ -236,7 +236,7 @@ spec:
"DialedDirectly": <true if proxy instances should be dialed directly>
},
"MeshGateway": {
"Mode": = "<name of mesh gatweay configuration for all proxies>"
"Mode": = "<name of mesh gateway configuration for all proxies>"
},
"Expose": {
"Checks": <true to expose all HTTP and gRPC checks through Envoy>,


@ -291,7 +291,7 @@ spec:
### Retry logic
Enable retry logic by delagating this resposbility to Consul and the proxy. Review the [`ServiceRouteDestination`](#serviceroutedestination) block for more details.
Enable retry logic by delegating this responsibility to Consul and the proxy. Review the [`ServiceRouteDestination`](#serviceroutedestination) block for more details.
<CodeTabs tabs={[ "HCL", "Kubernetes YAML", "JSON" ]}>


@ -32,10 +32,10 @@ Ensure that your Consul environment meets the following requirements.
* A local Consul agent is required to manage its configuration.
* Consul [Connect](/docs/agent/config/config-files#connect) must be enabled in both datacenters.
* Each [datacenter](/docs/agent/config/config-files#datacenter) must have a unique name.
* Each datacenters must be [WAN joined](https://learn.hashicorp.com/tutorials/consul/federarion-gossip-wan).
* Each datacenter must be [WAN joined](https://learn.hashicorp.com/tutorials/consul/federation-gossip-wan).
* The [primary datacenter](/docs/agent/config/config-files#primary_datacenter) must be set to the same value in both datacenters. This specifies which datacenter is the authority for Connect certificates and is required for services in all datacenters to establish mutual TLS with each other.
* [gRPC](/docs/agent/config/config-files#grpc_port) must be enabled.
* If you want to [enable gateways globally](/docs/connect/mesh-gateway#enabling-gateways-globally) you must enable [centralized configuration](/docs/agent/config/config-files#enable_central_service_config).
* If you want to [enable gateways globally](/docs/connect/gateways/mesh-gateway/service-to-service-traffic-datacenters#enabling-gateways-globally) you must enable [centralized configuration](/docs/agent/config/config-files#enable_central_service_config).
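Taken together, a hedged sketch of an agent configuration that satisfies these requirements might look like the following (datacenter names and port are illustrative, not a complete production configuration):

```HCL
# Illustrative snippet for a server agent in a secondary datacenter ("dc2").
datacenter         = "dc2"
primary_datacenter = "dc1"  # must be the same value in both datacenters

connect {
  enabled = true            # Connect must be enabled in both datacenters
}

ports {
  grpc = 8502               # gRPC must be enabled
}

# Required if you want to enable gateways globally via centralized configuration.
enable_central_service_config = true
```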
### Network


@ -15,7 +15,7 @@ WAN federation via mesh gateways allows for Consul servers in different datacent
to be federated exclusively through mesh gateways.
When setting up a
[multi-datacenter](https://learn.hashicorp.com/tutorials/consul/federarion-gossip-wan)
[multi-datacenter](https://learn.hashicorp.com/tutorials/consul/federation-gossip-wan)
Consul cluster, operators must ensure that all Consul servers in every
datacenter are directly connectable over their WAN-advertised network
address from each other.
@ -102,7 +102,7 @@ each datacenter otherwise the WAN will become only partly connected.
There are a few necessary additional pieces of configuration beyond those
required for standing up a
[multi-datacenter](https://learn.hashicorp.com/tutorials/consul/federarion-gossip-wan)
[multi-datacenter](https://learn.hashicorp.com/tutorials/consul/federation-gossip-wan)
Consul cluster.
Consul servers in the _primary_ datacenter should add this snippet to the


@ -30,7 +30,7 @@ Envoy must be run with the `--max-obj-name-len` option set to `256` or greater f
## Supported Versions
The following matrix describes Envoy compatibility for the currently supported **n-2 major Consul releases**. For previous Consul version compatability please view the respective versioned docs for this page.
The following matrix describes Envoy compatibility for the currently supported **n-2 major Consul releases**. For previous Consul version compatibility please view the respective versioned docs for this page.
Consul supports **four major Envoy releases** at the beginning of each major Consul release. Consul maintains compatibility with Envoy patch releases for each major version so that users can benefit from bug and security fixes in Envoy. As a policy, Consul will add support for new major versions of Envoy in a Consul major release. Support for newer versions of Envoy will not be added to existing releases.


@ -299,7 +299,7 @@ services {
</CodeTabs>
When performing an SRV query for this servie, the SRV response contains a single
When performing an SRV query for this service, the SRV response contains a single
record with a hostname in the format of `<hexadecimal-encoded IP>.addr.<datacenter>.consul`.
```shell-session


@ -28,7 +28,7 @@ Before deploying the ACL controller for the first time, you must [create the fol
| Bootstrap ACL Token | Set | `my-consul-bootstrap-token` |
| Consul Client ACL Token | Empty | `<PREFIX>-consul-client-token` |
The secret for the client token must be intially empty. The ACL controller creates the client token in Consul
The secret for the client token must be initially empty. The ACL controller creates the client token in Consul
and stores the token in Secrets Manager. In the secret name, `<PREFIX>` should be replaced with the
[secret name prefix](/docs/ecs/manual/acl-controller#secret-name-prefix) of your choice.
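As a hedged illustration (assuming the AWS CLI and the example name from the table above), the bootstrap token secret could be created ahead of time as shown below; the client token secret is created empty and is then populated by the ACL controller:

```shell-session
$ aws secretsmanager create-secret \
    --name my-consul-bootstrap-token \
    --secret-string "<bootstrap ACL token>"
```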


@ -64,7 +64,7 @@ during task startup.
| ---------------------- | ------ | ------------------------------------------------------------------------------------------------------------------ |
| `family` | string | The task family name. This is used as the Consul service name by default. |
| `networkMode` | string | Must be `awsvpc`, which is the only network mode supported by Consul on ECS. |
| `volumes` | list | Must be defined as shown above. Volumes are used to share configuration between containers for intial task setup. |
| `volumes` | list | Must be defined as shown above. Volumes are used to share configuration between containers for initial task setup. |
| `containerDefinitions` | list | The list of containers to run in this task (see [Application container](#application-container)). |
### Task Tags
@ -551,4 +551,4 @@ and `consul-ecs-mesh-init` containers.
* Create the task definition using the [AWS Console](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-taskdefinition.html) or the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html), or another method of your choice.
* Create an [ECS Service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) to start tasks using the task definition.
* Follow the [Secure Configration](/docs/ecs/manual/secure-configuration) to get production-ready.
* Follow the [Secure Configuration](/docs/ecs/manual/secure-configuration) to get production-ready.


@ -2,7 +2,7 @@
layout: docs
page_title: Secure Configuration - AWS ECS
description: >-
Manual Secure Confguration of the Consul Service Mesh on AWS ECS (Elastic Container Service).
Manual Secure Configuration of the Consul Service Mesh on AWS ECS (Elastic Container Service).
---
# Secure Configuration
@ -181,12 +181,12 @@ EOF
The following table describes the additional fields that must be included in the Consul client configuration file.
| Field name | Type | Description |
| --------------------------------------------------------------------- | ------- | ------------------------------------------------------------------------------------ |
| [`encrypt`](/docs/agent/options#_encrypt) | string | Specifies the gossip encryption key |
| [`tls.defaults.ca_file`](/docs/agent/options#tls_defaults_ca_file) | string | Specifies the Consul server CA cert for TLS verification. |
| [`acl.enabled`](/docs/agent/options#acl_enabled) | boolean | Enable ACLs for this agent. |
| [`acl.tokens.agent`](/docs/agent/options#acl_tokens_agent) | string | Specifies the Consul client token which authorizes this agent with Consul servers. |
| Field name | Type | Description |
| -------------------------------------------------------------------------------| ------- | ------------------------------------------------------------------------------------ |
| [`encrypt`](/docs/agent/config/cli-flags#_encrypt) | string | Specifies the gossip encryption key |
| [`tls.defaults.ca_file`](/docs/agent/config/config-files#tls_defaults_ca_file) | string | Specifies the Consul server CA cert for TLS verification. |
| [`acl.enabled`](/docs/agent/config/config-files#acl_enabled) | boolean | Enable ACLs for this agent. |
| [`acl.tokens.agent`](/docs/agent/config/config-files#acl_tokens_agent) | string | Specifies the Consul client token which authorizes this agent with Consul servers. |
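A hedged sketch of how these fields might be combined in the client configuration (shown in HCL; the key, CA path, and token are placeholders):

```HCL
encrypt = "<gossip encryption key>"

tls {
  defaults {
    ca_file = "/consul/consul-ca-cert.pem"  # Consul server CA cert for TLS verification
  }
}

acl {
  enabled = true
  tokens {
    agent = "<Consul client ACL token>"     # authorizes this agent with the Consul servers
  }
}
```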
## Configure `consul-ecs-mesh-init` and `consul-ecs-health-sync`


@ -14,7 +14,7 @@ component in depth.
We used the following procedure to measure resource usage:
- Executed performance tests while deploying clusers of various sizes. We
- Executed performance tests while deploying clusters of various sizes. We
ensured that deployment conditions stressed Consul on ECS components.
- After each performance test session, we recorded resource usage for each
component to determine worst-case scenario resource usage in a production


@ -59,7 +59,7 @@ over the WAN. Consul clients make use of resources in federated clusters by
forwarding RPCs through the Consul servers in their local cluster, but they
never interact with remote Consul servers directly. There are currently two
inter-cluster network models which can be viewed on HashiCorp Learn:
[WAN gossip (OSS)](https://learn.hashicorp.com/tutorials/consul/federarion-gossip-wan)
[WAN gossip (OSS)](https://learn.hashicorp.com/tutorials/consul/federation-gossip-wan)
and [Network Areas (Enterprise)](https://learn.hashicorp.com/tutorials/consul/federation-network-areas).
**LAN Gossip Pool**: A set of Consul agents that have full mesh connectivity


@ -111,7 +111,7 @@ Consul is platform agnostic which makes it a great fit for all environments, inc
Consul is available as a [self-install](/downloads) project or as a fully managed service mesh solution called [HCP Consul](https://portal.cloud.hashicorp.com/sign-in?utm_source=consul_docs).
HCP Consul enables users to discover and securely connect services without the added operational burden of maintaining a service mesh on their own.
You can learn more about Consul by visting the Consul Learn [tutorials](https://learn.hashicorp.com/consul).
You can learn more about Consul by visiting the Consul Learn [tutorials](https://learn.hashicorp.com/consul).
## Next


@ -17,7 +17,7 @@ This allows the user to natively configure Consul on select Kubernetes
## Annotations
Resource annotations could be used on the Kubernetes pod to control connnect-inject behavior.
Resource annotations could be used on the Kubernetes pod to control connect-inject behavior.
- `consul.hashicorp.com/connect-inject` - If this is "true" then injection
is enabled. If this is "false" then injection is explicitly disabled.
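For example, a minimal pod sketch that opts into injection might look like the following (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                                        # hypothetical pod name
  annotations:
    consul.hashicorp.com/connect-inject: "true"    # enable sidecar injection for this pod
spec:
  containers:
    - name: web
      image: nginx:1.21                            # placeholder application image
```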


@ -23,7 +23,7 @@ are configured with
rules so that they are placed on different nodes. A readiness probe is
configured that marks the pod as ready only when it has established a leader.
A Kubernetes `Service` is registered to represent the servers and exposes ports that are requried to communicate to the Consul server pods.
A Kubernetes `Service` is registered to represent the servers and exposes ports that are required to communicate to the Consul server pods.
The servers utilize the DNS address of this service to join a Consul cluster, without requiring any other access to the Kubernetes cluster. Additional Consul servers may also utilize non-ready endpoints which are published by the Kubernetes service, so that servers can utilize the service for joining during bootstrap and upgrades.
Additionally, a **PodDisruptionBudget** is configured so the Consul server


@ -144,7 +144,7 @@ If ACLs are enabled, update the terminating gateway acl role to have `service: w
being represented by the gateway:
- Create a new policy that includes these permissions
- Update the existing rolc to include the new policy
- Update the existing role to include the new policy
<CodeBlockConfig filename="write-policy.hcl">


@ -58,6 +58,6 @@ Consul Kubernetes delivered Red Hat OpenShift support starting with Consul Helm
Consul Kubernetes is [certified](https://marketplace.cloud.vmware.com/services/details/hashicorp-consul-1?slug=true) for both VMware Tanzu Kubernetes Grid, and VMware Tanzu Kubernetes Integrated Edition.
- Tanzu Kubernetes Grid is certified for version 1.3.0 and above. Only Calico is supported as the CNI Plugin.
- Tanzu Kuberntetes Grid Integrated Edition is supported for version 1.11.1 and above. [NSX-T CNI Plugin v3.1.2](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/NSX-Container-Plugin-312-Release-Notes.html) and greater should be used and configured with the `enable_hostport_snat` setting set to `true`.
- Tanzu Kubernetes Grid Integrated Edition is supported for version 1.11.1 and above. [NSX-T CNI Plugin v3.1.2](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/rn/NSX-Container-Plugin-312-Release-Notes.html) and greater should be used and configured with the `enable_hostport_snat` setting set to `true`.


@ -46,8 +46,8 @@ global:
</CodeBlockConfig>
If the version of Consul is < 1.10, use the following config with the name and key of the secret you just created.
(These values arerequired on top ofyour normal configuration.)
If the version of Consul is < 1.10, use the following config with the name and key of the secret you just created.
(These values are required on top of your normal configuration.)
-> **Note:** The value of `server.enterpriseLicense.enableLicenseAutoload` must be set to `false`.


@ -114,15 +114,15 @@ These instructions describe how to install a specific version of the CLI and are
Homebrew does not provide a method to install previous versions of a package. The Consul K8s CLI will need to be installed manually. Previous versions of the Consul K8s CLI could be used to install a specific version of Consul on the Kubernetes control plane. Manual upgrades to the Consul K8s CLI are also performed in the same manner, provided that the Consul K8s CLI was manually installed beforehand.
1. Download the desired Consul K8s CLI using the following `curl` command. Enter the approriate version for your deployment via the `$VERSION` environment variable.
1. Download the desired Consul K8s CLI using the following `curl` command. Enter the appropriate version for your deployment via the `$VERSION` environment variable.
```shell-session
$ export VERSION=0.39.0 && \
curl --location "https://releases.hashicorp.com/consul-k8s/${VERSION}/consul-k8s_${VERSION}_darwin_amd64.zip" --output consul-k8s-cli.zip
```
1. Unzip the zip file ouput to extract the `consul-k8s` CLI binary. This overwrites existing files and also creates a `.consul-k8s` subdirectory in your `$HOME` folder.
1. Unzip the zip file output to extract the `consul-k8s` CLI binary. This overwrites existing files and also creates a `.consul-k8s` subdirectory in your `$HOME` folder.
```shell-session
$ unzip -o consul-k8s-cli.zip -d ~/.consul-k8s
```
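As a follow-up sketch, assuming you add the extracted directory to your `PATH`, you can verify the binary:

```shell-session
$ export PATH=$PATH:$HOME/.consul-k8s && consul-k8s version
```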


@ -42,7 +42,7 @@ Next, you will need to create a Vault policy that allows read access to this sec
<CodeBlockConfig filename="bootstrap-token-policy.hcl">
```HCL
path "secret/data/consul/boostrap-token" {
path "secret/data/consul/bootstrap-token" {
capabilities = ["read"]
}
```
@ -52,7 +52,7 @@ path "secret/data/consul/boostrap-token" {
Apply the Vault policy by issuing the `vault policy write` CLI command:
```shell-session
$ vault policy write boostrap-token-policy boostrap-token-policy.hcl
$ vault policy write bootstrap-token-policy bootstrap-token-policy.hcl
```
## Setup per Consul datacenter
@ -64,7 +64,7 @@ Next, you will create Kubernetes auth roles for the Consul `server-acl-init` con
$ vault write auth/kubernetes/role/consul-server-acl-init \
bound_service_account_names=<Consul server service account> \
bound_service_account_namespaces=<Consul installation namespace> \
policies=boostrap-token-policy \
policies=bootstrap-token-policy \
ttl=1h
```
@ -78,7 +78,7 @@ $ helm template --release-name ${RELEASE_NAME} -s templates/server-acl-init-serv
### Configure the Vault Kubernetes auth role in the Consul on Kubernetes helm chart
Now that you have configured Vault, you can configure the Consul Helm chart to
use the ACL bootrap token in Vault:
use the ACL bootstrap token in Vault:
<CodeBlockConfig filename="values.yaml">


@ -26,7 +26,7 @@ Generally, for each secret you wish to store in Vault, the process to integrate
### Example - Gossip Encryption Key Integration
Following the general integraiton steps, a more detailed workflow for integration of the [Gossip encryption key](/docs/k8s/installation/vault/data-integration/gossip) with the Vault Secrets backend would like the following:
Following the general integration steps, a more detailed workflow for integration of the [Gossip encryption key](/docs/k8s/installation/vault/data-integration/gossip) with the Vault Secrets backend would look like the following:
#### One time setup in Vault
@ -134,7 +134,7 @@ For example, if your Consul on Kubernetes servers need access to [Gossip encrypt
$ vault policy write license-policy license-policy.hcl
```
1. Create one role that maps the Consul on Kubernetes servicea account to the 3 policies.
1. Create one role that maps the Consul on Kubernetes service account to the 3 policies.
```shell-session
$ vault write auth/kubernetes/role/consul-server \
bound_service_account_names=<Consul server service account> \
@ -144,7 +144,7 @@ For example, if your Consul on Kubernetes servers need access to [Gossip encrypt
```
## Detailed data integration guides
The following secrets can be stored in Vault KV secrets engine, which is meant to handle arbitraty secrets:
The following secrets can be stored in Vault KV secrets engine, which is meant to handle arbitrary secrets:
- [ACL Bootstrap token](/docs/k8s/installation/vault/data-integration/bootstrap-token)
- [ACL Partition token](/docs/k8s/installation/vault/data-integration/partition-token)
- [ACL Replication token](/docs/k8s/installation/vault/data-integration/replication-token)


@ -55,7 +55,7 @@ To use Vault as the Server TLS Certificate Provider on Kubernetes, we will ne
## One time setup in Vault
### Store the secret in Vault
This step is not valid to this use case because we are not storing a single secret. We are configuring Vault as a provider to mint certificates on an ongaing basis.
This step does not apply to this use case because we are not storing a single secret. We are configuring Vault as a provider to mint certificates on an ongoing basis.
### Create a Vault policy that authorizes the desired level of access to the secret
To use Vault to issue Server TLS certificates, you will need to create the following:


@ -13,7 +13,7 @@ to use Consul Helm chart with Vault as the secrets storage backend.
## Secrets Overview
By default, Consul on Kubernetes leverages Kubernetes secrets which are base64 encoded and unencrypted. In addition, the following limitations exist with mangaging sensitive data within Kubernetes secrets:
By default, Consul on Kubernetes leverages Kubernetes secrets which are base64 encoded and unencrypted. In addition, the following limitations exist with managing sensitive data within Kubernetes secrets:
- There are no lease or time-to-live properties associated with these secrets.
- Kubernetes can only manage resources, such as secrets, within a cluster boundary. If you have sets of clusters, the resources across them need to be managed separately.


@ -65,7 +65,7 @@ Before installing the Vault Injector and configuring the Vault Kubernetes Auth M
```
#### VAULT_AUTH_METHOD_NAME
- **Recommended value:** a concatentation of a `kubernetes-` prefix (to denote the auth method type) with `DATACENTER` environment variable.
- **Recommended value:** a concatenation of a `kubernetes-` prefix (to denote the auth method type) with `DATACENTER` environment variable.
```shell-session
$ export VAULT_AUTH_METHOD_NAME=kubernetes-${DATACENTER}
```
@ -82,11 +82,11 @@ Before installing the Vault Injector and configuring the Vault Kubernetes Auth M
```shell-session
$ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
```
- If Vault is not running on Kubernetes, utilize the `api_addr` as defined in the Vault [High Availability Paremeters](https://www.vaultproject.io/docs/configuration#high-availability-parameters) configuration:
- If Vault is not running on Kubernetes, utilize the `api_addr` as defined in the Vault [High Availability Parameters](https://www.vaultproject.io/docs/configuration#high-availability-parameters) configuration:
```shell-session
$ export VAULT_SERVER_HOST=<external IP for vault cluster>
```
#### VAULT_ADDR
- **Recommended value:** Connecting to port 8200 of the Vault server
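A sketch consistent with the `VAULT_ADDR` value used later in this guide:

```shell-session
$ export VAULT_ADDR=http://${VAULT_SERVER_HOST}:8200
```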
@ -147,8 +147,8 @@ $ export KUBE_API_URL=$(kubectl config view -o jsonpath="{.clusters[?(@.name ==
Next, you will configure the Vault Kubernetes Auth Method for the datacenter. You will need to provide it with:
- `token_reviewer_jwt` - this is a JWT token from the Consul datacenter cluster that the Vault Kubernetes Auth Method will use to query the Consul datacenter Kubernetes API when services in the Consul datacenter request data from Vault.
- `kubernetes_host` - this is the URL of the Consul datacenter's Kubernetes API that Vault will query to authenticate the service account of an incoming request from a Consul data ceneter kubernetes service.
- `kubernetes_ca_cert` - this is the CA certifcation that is currently being used by the Consul datacenter Kubernetes cluster.
- `kubernetes_host` - this is the URL of the Consul datacenter's Kubernetes API that Vault will query to authenticate the service account of an incoming request from a Consul data center kubernetes service.
- `kubernetes_ca_cert` - this is the CA certificate that is currently being used by the Consul datacenter Kubernetes cluster.
```shell-session
$ vault write auth/kubernetes/config \
@ -158,7 +158,7 @@ $ vault write auth/kubernetes/config \
```
#### Enable Vault as the Secrets Backend in the Consul datacenter
Finally, you will configure the Consul on Kubernetes helm chart for the datacenter to expect to receive the following values (if you have configured them) to be retreived from Vault:
Finally, you will configure the Consul on Kubernetes helm chart for the datacenter to expect to receive the following values (if you have configured them) to be retrieved from Vault:
- ACL Bootstrap token ([`global.acls.bootstrapToken`](/docs/k8s/helm#v-global-acls-bootstraptoken))
- ACL Partition token ([`global.acls.partitionToken`](/docs/k8s/helm#v-global-acls-partitiontoken))
- ACL Replication token ([`global.acls.replicationToken`](/docs/k8s/helm#v-global-acls-replicationtoken))


@ -38,7 +38,7 @@ In this setup, you will deploy Vault server in the primary datacenter (dc1) Kube
~> **Note**: For demonstration purposes, you will deploy a Vault server in dev mode. For production installations, this is not recommended. Please visit the [Vault Deployment Guide](https://learn.hashicorp.com/tutorials/vault/raft-deployment-guide?in=vault/day-one-raft) for guidance on how to install Vault in a production setting.
1. Change your currrent Kubernetes context to target the primary datacenter (dc1).
1. Change your current Kubernetes context to target the primary datacenter (dc1).
```shell-session
$ kubectl config use-context <context for dc1>
@ -113,7 +113,7 @@ In this setup, you will deploy Vault server in the primary datacenter (dc1) Kube
$ export VAULT_ADDR=http://${VAULT_SERVER_HOST}:8200
```
## Systems Integation
## Systems Integration
### Overview
To use Vault as the Service Mesh Certificate Provider in Kubernetes, you must complete following systems integration actions:
@ -233,7 +233,7 @@ To use Vault as the Service Mesh Certificate Provider in Kubernetes, you must co
$ vault auth enable -path=kubernetes-dc2 kubernetes
```
1. Create a service account with access to the Kubenetes API in the secondary datacenter (dc2). For the secondary datacenter (dc2) auth method, you first need to create a service account that allows the Vault server in the primary datacenter (dc1) cluster to talk to the Kubernetes API in the secondary datacenter (dc2) cluster.
1. Create a service account with access to the Kubernetes API in the secondary datacenter (dc2). For the secondary datacenter (dc2) auth method, you first need to create a service account that allows the Vault server in the primary datacenter (dc1) cluster to talk to the Kubernetes API in the secondary datacenter (dc2) cluster.
```shell-session
$ cat <<EOF >> auth-method-serviceaccount.yaml


@ -201,7 +201,7 @@ that can be used.
If you are using Consul's service mesh features, as opposed to the [service sync](/docs/k8s/service-sync)
functionality, you must be aware of the behavior of the service mesh during upgrades.
Consul clients operate as a daemonset across all Kubernernetes nodes. During an upgrade,
Consul clients operate as a daemonset across all Kubernetes nodes. During an upgrade,
if the Consul client daemonset has changed, the client pods will need to be restarted
because their spec has changed.


@ -61,7 +61,7 @@ tls {
The `consul` block is used to configure the CTS connection with a Consul agent to perform queries to the Consul Catalog and Consul KV pertaining to task execution.
-> **Note:** Use HTTP/2 to improve Consul-Terraform-Sync performance when communicating with the local Consul process. [TLS/HTTPS](/docs/agent/config/config-files) must be configured for the local Consul with the [cert_file](/docs/agent/config/config-filess#cert_file) and [key_file](/docs/agent/config/config-files#key_file) parameters set. For the Consul-Terraform-Sync configuration, set `tls.enabled = true` and set the `address` parameter to the HTTPS URL, e.g., `address = example.consul.com:8501`. If using self-signed certificates for Consul, you will also need to set `tls.verify = false` or add the certificate to `ca_cert` or `ca_path`.
-> **Note:** Use HTTP/2 to improve Consul-Terraform-Sync performance when communicating with the local Consul process. [TLS/HTTPS](/docs/agent/config/config-files) must be configured for the local Consul with the [cert_file](/docs/agent/config/config-files#cert_file) and [key_file](/docs/agent/config/config-files#key_file) parameters set. For the Consul-Terraform-Sync configuration, set `tls.enabled = true` and set the `address` parameter to the HTTPS URL, e.g., `address = example.consul.com:8501`. If using self-signed certificates for Consul, you will also need to set `tls.verify = false` or add the certificate to `ca_cert` or `ca_path`.
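A hedged CTS configuration sketch following that note (the address and certificate path are placeholders):

```HCL
consul {
  address = "example.consul.com:8501"  # HTTPS address of the local Consul agent

  tls {
    enabled = true
    ca_cert = "/path/to/consul-agent-ca.pem"  # or set verify = false for self-signed certs
  }
}
```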
To read more on suggestions for configuring the Consul agent, see [run an agent](/docs/nia/installation/requirements#run-an-agent).


@ -50,7 +50,7 @@ $ docker run --rm hashicorp/consul-terraform-sync -version
```
</Tab>
<Tab heading="Homewbrew on OS X">
<Tab heading="Homebrew on OS X">
The CTS OSS binary is available in the HashiCorp tap, which is a repository of all our Homebrew packages.
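A likely install sequence, assuming the standard tap and formula names:

```shell-session
$ brew tap hashicorp/tap
$ brew install hashicorp/tap/consul-terraform-sync
```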


@ -9,7 +9,7 @@ description: >-
## Release Highlights
- **Admin Partitions (Enterprise):** Consul 1.11.0 Enteprise introduces a new entity for defining administrative and networking boundaries within a Consul deployment. This feature also enables servers to communicate with clients over a specific gossip segment created for each partition. This release also enables cross partition communication between services across partitions, using Mesh Gateways. For more information refer to the [Admin Partitions](/docs/enterprise/admin-partitions) documentation.
- **Admin Partitions (Enterprise):** Consul 1.11.0 Enterprise introduces a new entity for defining administrative and networking boundaries within a Consul deployment. This feature also enables servers to communicate with clients over a specific gossip segment created for each partition. This release also enables cross partition communication between services across partitions, using Mesh Gateways. For more information refer to the [Admin Partitions](/docs/enterprise/admin-partitions) documentation.
- **Virtual IPs for services deployed with Consul Service Mesh:** Consul will now generate a unique virtual IP for each service deployed within Consul Service Mesh, allowing transparent proxy to route to services within a data center that exist in different clusters or outside the service mesh.


@ -79,7 +79,7 @@ The following syntax describes how to include a resource label in the rule:
</CodeBlockConfig>
</CodeTabs>
Labels provide operators with more granular control over access to the resouce, but the following resource types do not take a label:
Labels provide operators with more granular control over access to the resource, but the following resource types do not take a label:
- `acl`
- `keyring`
@ -344,7 +344,7 @@ $ curl \
}' http://127.0.0.1:8500/v1/acl/policy?token=<management token>
```
The policy configuration is returned when the call is succesfully performed:
The policy configuration is returned when the call is successfully performed:
```json
{


@ -11,7 +11,7 @@ description: >-
A role is a collection of policies that your ACL administrator can link to a token.
They enable you to reuse policies by decoupling the policies from the token distributed to team members.
Instead, the token is linked to the role, which is able to hold several policies that can be updated asynchronously without distributing new tokens to users.
As a result, roles can provide a more convenient authentication infrastrcture than creating unique policies and tokens for each requester.
As a result, roles can provide a more convenient authentication infrastructure than creating unique policies and tokens for each requester.
## Workflow Overview
@ -19,7 +19,7 @@ Roles are configurations linking several policies to a token. The following proc
1. Assemble rules into policies (see [Policies](/docs/security/acl/acl-policies)) and register them in Consul.
1. Define a role and include the policy IDs or names.
1. Register the role in Consule and link it to a token.
1. Register the role in Consul and link it to a token.
1. Distribute the tokens to users for implementation.
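As a quick illustration of the middle steps in this workflow, a role can be created from an existing policy and linked to a token on the CLI (policy and role names are hypothetical):

```shell-session
$ consul acl role create -name "crawler" -description "Role for crawler services" -policy-name "crawler-kv"
$ consul acl token create -description "Token for crawler services" -role-name "crawler"
```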
## Creating Roles
@ -73,7 +73,7 @@ Roles may contain the following attributes:
You can specify a service identity when configuring roles or linking tokens to policies. Service identities enable you to quickly construct policies for services, rather than creating identical policies for each service.
Service identities are used during the authorization process to automatically generate a policy for the service(s) specifed. The policy will be linked to the role or token so that the service(s) can _be discovered_ and _discover other healthy service instances_ in a service mesh. Refer to the [service mesh](/docs/connect) topic for additional information about Consul service mesh.
Service identities are used during the authorization process to automatically generate a policy for the service(s) specified. The policy will be linked to the role or token so that the service(s) can _be discovered_ and _discover other healthy service instances_ in a service mesh. Refer to the [service mesh](/docs/connect) topic for additional information about Consul service mesh.
### Service Identity Specification
@ -104,7 +104,7 @@ Use the following syntax to define a service identity:
- `ServiceIdentities`: Declares a service identity block.
- `ServiceIdentities.ServiceName`: String value that specifies the name of the service you want to associate with the policy.
- `ServiceIdentitites.Datacenters`: Array that specifies the names of datacenters in which the service identity applies. This field is optional.
- `ServiceIdentities.Datacenters`: Array that specifies the names of datacenters in which the service identity applies. This field is optional.
Refer to the [API documentation for roles](/api-docs/acl/roles#sample-payload) for additional information and examples.
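Mirroring the `NodeIdentities` syntax shown later in this topic, a hedged sketch of a service identity block (service and datacenter names are placeholders) could look like:

```HCL
ServiceIdentities = {
  ServiceName = "web"      # hypothetical service name
  Datacenters = ["dc1"]    # optional; scopes the identity to dc1
}
```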
@ -217,7 +217,7 @@ node_prefix "" {
</CodeBlockConfig>
Per the `ServiceIdentitites.Datacenters` configuration, the `db` policy is scoped to resources in the `dc1` datacenter.
Per the `ServiceIdentities.Datacenters` configuration, the `db` policy is scoped to resources in the `dc1` datacenter.
<CodeBlockConfig filename="db-policy.hcl">
@ -247,7 +247,7 @@ node_prefix "" {
You can specify a node identity when configuring roles or linking tokens to policies. _Node_ commonly refers to a Consul agent, but a node can also be a physical server, cloud instance, virtual machine, or container.
Node identities enable you to quickly construct policies for nodes, rather than manually creating identical polices for each node. They are used during the authorization process to automatically generate a policy for the node(s) specifed. You can specify the token linked to the policy in the [`acl_tokens_agent`](/docs/agent/options#acl_tokens_agent) field when configuring the agent.
Node identities enable you to quickly construct policies for nodes, rather than manually creating identical policies for each node. They are used during the authorization process to automatically generate a policy for the node(s) specified. You can specify the token linked to the policy in the [`acl_tokens_agent`](/docs/agent/options#acl_tokens_agent) field when configuring the agent.
### Node Identity Specification
@ -278,7 +278,7 @@ NodeIdentities = {
- `NodeIdentities`: Declares a node identity block.
- `NodeIdentities.NodeName`: String value that specifies the name of the node you want to associate with the policy.
- `NodeIdentitites.Datacenters`: Array that specifies the names of datacenters in which the node identity applies. This field is optional.
- `NodeIdentities.Datacenters`: Array that specifies the names of datacenters in which the node identity applies. This field is optional.
Refer to the [API documentation for roles](/api-docs/acl/roles#sample-payload) for additional information and examples.


@ -155,7 +155,7 @@ Refer to the [API](/api-docs/acl/token) or [command line](/commands/acl/token) d
## Special-purpose Tokens
Your ACL admininstrator can configure several tokens that enable specific functions, such as bootstrapping the ACL
Your ACL administrator can configure several tokens that enable specific functions, such as bootstrapping the ACL
system or accessing Consul under specific conditions. The following table describes the special-purpose tokens:
| Token | Servers | Clients | Description |
@ -208,7 +208,7 @@ In Consul 1.4 - 1.10, the following special tokens were known by different names
## Built-in Tokens
Consul includes a built-in anonymous token and intial management token. Both tokens are injected during when you bootstrap the cluster.
Consul includes a built-in anonymous token and an initial management token. Both tokens are injected when you bootstrap the cluster.
### Anonymous Token


@ -65,7 +65,7 @@ parameters for an auth method of type `aws-iam`:
be referenced in binding rules using `entity_tags.<tag>`. For example, if `IAMEntityTags` contains
`service-name` and if a `service-name` tag exists on the IAM role or user, then you can reference
the tag value using `entity_tags.service-name` in binding rules. If the tag is not present on the
IAM role or user, then `entity_tags.service-name` evalutes to the empty string in binding rules.
IAM role or user, then `entity_tags.service-name` evaluates to the empty string in binding rules.
- `ServerIDHeaderValue` `(string: "")` - The value to require in the `X-Consul-IAM-ServerID` header
in login requests. If set, clients must include the `X-Consul-IAM-ServerID` header in the AWS API
requests used to login to the auth method, and the client-provided value for the header must match
@ -199,7 +199,7 @@ Method](/api-docs/acl#login-to-auth-method) API request, using the following ste
identity in the response. This is a strong guarantee of the client's identity.
- Optionally, the auth method sends the `iam:GetRole` or `iam:GetUser` request to AWS,
if `EnableIAMEntityDetails=true` in the auth method configuration. This request is pre-signed
by the client, so no other credentials or permisssions are required to make the request. Only the
by the client, so no other credentials or permissions are required to make the request. Only the
client needs the `iam:GetRole` or `iam:GetUser` permission. AWS validates the client's signature
when it receives the request. If the signature is valid, AWS returns the IAM role or user details.
This response is not a guarantee of the client's identity - any role or user name may have been


@ -63,13 +63,13 @@ Refer to the following topics for details about policies:
A role is a collection of policies that your ACL administrator can link to a token.
They enable you to reuse policies by decoupling the policies from the token distributed to team members.
Instead, the token is linked to the role, which is able to hold several policies that can be updated asynchronously without distributing new tokens to users.
As a result, roles can provide a more convenient authentication infrastrcture than creating unique policies and tokens for each requester.
As a result, roles can provide a more convenient authentication infrastructure than creating unique policies and tokens for each requester.
Refer to the [Roles](/docs/security/acl/acl-roles) topic for additional information.
## Service Identities
Service identities are configuration blocks that you can add to role configurations or specify when linking tokens to policies. The are used during the authorization process to automatically generate a policy for the service(s) specifed. The policy will be linked to the role or token so that the service(s) can _be discovered_ and _discover other healthy service instances_ in a service mesh.
Service identities are configuration blocks that you can add to role configurations or specify when linking tokens to policies. They are used during the authorization process to automatically generate a policy for the service(s) specified. The policy will be linked to the role or token so that the service(s) can _be discovered_ and _discover other healthy service instances_ in a service mesh.
Service identities enable you to quickly construct policies for services, rather than creating identical policies for each service.
@ -80,7 +80,7 @@ Refer to the following topics for additional information about service identitie
## Node Identities
Node identities are configuration blocks that you can add to role configurations or specify when linking tokens to policies. The are used during the authorization process to automatically generate a policy for the node(s) specifed. You can specify the token linked to the policy in the [`acl_tokens_agent`](/docs/agent/options#acl_tokens_agent) field when configuring the agent.
Node identities are configuration blocks that you can add to role configurations or specify when linking tokens to policies. They are used during the authorization process to automatically generate a policy for the node(s) specified. You can specify the token linked to the policy in the [`acl_tokens_agent`](/docs/agent/options#acl_tokens_agent) field when configuring the agent.
Node identities enable you to quickly construct policies for nodes, rather than creating identical policies for each node.