Fix missing docs for k8s

Luke Kysow 2021-01-27 12:07:40 -08:00
parent 972e3add8e
commit b9df5d55b1
12 changed files with 390 additions and 22 deletions

View File

@@ -849,6 +849,11 @@ spec:
name: 'name',
description: 'Set to the name of the service being configured.',
},
{
name: 'namespace',
description:
'If running Consul Open Source, the namespace is ignored (see [Kubernetes Namespaces in Consul OSS](/docs/k8s/crds#consul-oss)). If running Consul Enterprise, see [Kubernetes Namespaces in Consul Enterprise](/docs/k8s/crds#consul-enterprise) for more details.',
},
],
hcl: false,
},

View File

@@ -189,7 +189,17 @@ spec:
},
{
name: 'metadata',
children: [
{
name: 'name',
description: 'Must be set to `global`',
},
{
name: 'namespace',
description:
'If running Consul Open Source, the namespace is ignored (see [Kubernetes Namespaces in Consul OSS](/docs/k8s/crds#consul-oss)). If running Consul Enterprise, see [Kubernetes Namespaces in Consul Enterprise](/docs/k8s/crds#consul-enterprise) for more details.',
},
],
hcl: false,
},
{

View File

@@ -91,6 +91,11 @@ spec:
name: 'name',
description: 'Set to the name of the service being configured.',
},
{
name: 'namespace',
description:
'If running Consul Open Source, the namespace is ignored (see [Kubernetes Namespaces in Consul OSS](/docs/k8s/crds#consul-oss)). If running Consul Enterprise, see [Kubernetes Namespaces in Consul Enterprise](/docs/k8s/crds#consul-enterprise) for more details.',
},
],
hcl: false,
},

View File

@@ -108,7 +108,7 @@ spec:
</Tab>
</Tabs>
### gRPC
<Tabs>
<Tab heading="HCL">
@@ -196,7 +196,7 @@ spec:
</Tab>
</Tabs>
### L4 and L7
<Tabs>
<Tab heading="HCL">
@@ -306,7 +306,12 @@ spec:
{
name: 'name',
description:
'Unlike other config entries, the `metadata.name` field is not used to set the name of the service being configured. Instead, that is set in `spec.destination.name`, so this name can be set to anything. See [ServiceIntentions Special Case (OSS)](/docs/k8s/crds#serviceintentions-special-case) or [ServiceIntentions Special Case (Enterprise)](/docs/k8s/crds#serviceintentions-special-case-enterprise) for more details.',
},
{
name: 'namespace',
description:
'If running Consul Open Source, the namespace is ignored (see [Kubernetes Namespaces in Consul OSS](/docs/k8s/crds#consul-oss)). If running Consul Enterprise, see [Kubernetes Namespaces in Consul Enterprise](/docs/k8s/crds#consul-enterprise) for more details.',
},
],
hcl: false,
@@ -326,9 +331,9 @@ spec:
name: 'namespace',
hcl: false,
enterprise: true,
type: 'string: <optional>',
description:
"Specifies the namespaces the config entry will apply to. This may be set to the wildcard character (`*`) to match all services in all namespaces that don't otherwise have intentions defined. If not set, the namespace used will depend on the `connectInject.consulNamespaces` configuration. See [ServiceIntentions Special Case (Enterprise)](/docs/k8s/crds#serviceintentions-special-case-enterprise) for more details.",
},
],
},

View File

@@ -228,6 +228,11 @@ spec:
name: 'name',
description: 'Set to the name of the service being configured.',
},
{
name: 'namespace',
description:
'If running Consul Open Source, the namespace is ignored (see [Kubernetes Namespaces in Consul OSS](/docs/k8s/crds#consul-oss)). If running Consul Enterprise, see [Kubernetes Namespaces in Consul Enterprise](/docs/k8s/crds#consul-enterprise) for more details.',
},
],
hcl: false,
},

View File

@@ -263,6 +263,11 @@ spec:
name: 'name',
description: 'Set to the name of the service being configured.',
},
{
name: 'namespace',
description:
'If running Consul Open Source, the namespace is ignored (see [Kubernetes Namespaces in Consul OSS](/docs/k8s/crds#consul-oss)). If running Consul Enterprise, see [Kubernetes Namespaces in Consul Enterprise](/docs/k8s/crds#consul-enterprise) for more details.',
},
],
hcl: false,
},

View File

@@ -171,6 +171,11 @@ spec:
name: 'name',
description: 'Set to the name of the service being configured.',
},
{
name: 'namespace',
description:
'If running Consul Open Source, the namespace is ignored (see [Kubernetes Namespaces in Consul OSS](/docs/k8s/crds#consul-oss)). If running Consul Enterprise, see [Kubernetes Namespaces in Consul Enterprise](/docs/k8s/crds#consul-enterprise) for more details.',
},
],
hcl: false,
},

View File

@@ -640,6 +640,11 @@ and configure default certificates for mutual TLS. Also override the SNI and CA
name: 'name',
description: 'Set to the name of the gateway being configured.',
},
{
name: 'namespace',
description:
'If running Consul Open Source, the namespace is ignored (see [Kubernetes Namespaces in Consul OSS](/docs/k8s/crds#consul-oss)). If running Consul Enterprise, see [Kubernetes Namespaces in Consul Enterprise](/docs/k8s/crds#consul-enterprise) for more details.',
},
],
hcl: false,
},

View File

@@ -173,3 +173,211 @@ to be deleted since that would put Consul into a broken state.
In order to delete the `ServiceDefaults` config, you would need to first delete
the `ServiceSplitter`.
## Kubernetes Namespaces
### Consul OSS
Consul Open Source (Consul OSS) ignores Kubernetes namespaces and registers all services into the same
global Consul registry based on their names. For example, service `web` in Kubernetes namespace
`web-ns` and service `admin` in Kubernetes namespace `admin-ns` will be registered into
Consul as `web` and `admin`, respectively; the source Kubernetes namespace is discarded.
When creating custom resources to configure these services, the namespace of the
custom resource is also ignored. For example, you can create a `ServiceDefaults`
custom resource for service `web` in the Kubernetes namespace `admin-ns` even though
the `web` service is actually running in the `web-ns` namespace (although this is not recommended):
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web
  namespace: admin-ns
spec:
  protocol: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: web-ns
spec: ...
```
~> **NOTE:** If you create two custom resources of the same kind **and** the same
name in different Kubernetes namespaces, the last one created will not be synced.
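For example, a minimal sketch (the `protocol` values are illustrative): the two resources below share the kind `ServiceDefaults` **and** the name `web`, so whichever one is created last will not be synced:
```yaml
# Synced: created first.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web
  namespace: web-ns
spec:
  protocol: http
---
# Not synced: same kind and name, different Kubernetes namespace, created later.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web
  namespace: admin-ns
spec:
  protocol: grpc
```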
#### ServiceIntentions Special Case
`ServiceIntentions` are different from the other custom resources because the
name of the resource doesn't matter. For other resources, the name of the resource
determines which service it configures. For example, this resource configures
the service `web`:
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web
spec:
  protocol: http
```
For `ServiceIntentions`, because we need to support wildcard intentions
(e.g. `foo => * (allow)`, meaning that `foo` can talk to **any** service),
and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name`
to configure the destination service for the intention:
```yaml
# foo => * (allow)
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: name-does-not-matter
spec:
  destination:
    name: '*'
  sources:
    - name: foo
      action: allow
---
# foo => web (allow)
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: name-does-not-matter
spec:
  destination:
    name: web
  sources:
    - name: foo
      action: allow
```
~> **NOTE:** If two `ServiceIntentions` resources set the same `spec.destination.name`, the
last one created will not be synced.
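For example, a minimal sketch (the resource names and the `bar` source are illustrative): both resources below set `spec.destination.name` to `web`, so the one created last will not be synced:
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: web-from-foo
spec:
  destination:
    name: web # same destination as the resource below
  sources:
    - name: foo
      action: allow
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: web-from-bar
spec:
  destination:
    name: web # duplicate destination; whichever resource is created last is not synced
  sources:
    - name: bar
      action: allow
```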
### Consul Enterprise <EnterpriseAlert inline />
Consul Enterprise supports multiple configurations for how Kubernetes namespaces are mapped
to Consul namespaces. The Consul namespace that a custom resource is registered
into depends on the configuration being used, but in general you should create your
custom resources in the same Kubernetes namespace as the service they configure,
and everything will work as expected.
The details on each configuration are:
1. **Mirroring** - The Kubernetes namespace will be "mirrored" into Consul, i.e.
service `web` in Kubernetes namespace `web-ns` will be registered as service `web`
in the Consul namespace `web-ns`. In the same vein, a `ServiceDefaults` custom resource with
name `web` in Kubernetes namespace `web-ns` will configure that same service (see the sketch after this list).
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
```yaml
global:
  name: consul
  enableConsulNamespaces: true
  image: hashicorp/consul-enterprise:<tag>-ent
connectInject:
  consulNamespaces:
    mirroringK8S: true
```
1. **Mirroring with prefix** - The Kubernetes namespace will be "mirrored" into Consul
with a prefix added to the Consul namespace, i.e.
if the prefix is `k8s-` then service `web` in Kubernetes namespace `web-ns` will be registered as service `web`
in the Consul namespace `k8s-web-ns`. In the same vein, a `ServiceDefaults` custom resource with
name `web` in Kubernetes namespace `web-ns` will configure that same service.
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
```yaml
global:
  name: consul
  enableConsulNamespaces: true
  image: hashicorp/consul-enterprise:<tag>-ent
connectInject:
  consulNamespaces:
    mirroringK8S: true
    mirroringK8SPrefix: k8s-
```
1. **Single destination namespace** - The Kubernetes namespace is ignored and all services
will be registered into the same Consul namespace, i.e. if the destination Consul
namespace is `my-ns` then service `web` in Kubernetes namespace `web-ns` will
be registered as service `web` in Consul namespace `my-ns`.
In this configuration, the Kubernetes namespace of the custom resource is ignored.
For example, a `ServiceDefaults` custom resource named `web` in Kubernetes
namespace `admin-ns` will configure the service named `web`, even though that
service is running in Kubernetes namespace `web-ns`, because the `ServiceDefaults`
resource ends up registered into the same Consul namespace, `my-ns`.
This is configured via [`connectInject.consulNamespaces`](/docs/k8s/helm#v-connectinject-consulnamespaces):
```yaml
global:
  name: consul
  enableConsulNamespaces: true
  image: hashicorp/consul-enterprise:<tag>-ent
connectInject:
  consulNamespaces:
    consulDestinationNamespace: 'my-ns'
```
~> **NOTE:** In this configuration, if you create two custom resources of the same
kind **and** the same name in two different Kubernetes namespaces, the last one created will not be synced.
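For example, a minimal sketch of the mirroring configuration (the service and namespace names are illustrative): with `mirroringK8S: true`, creating the resource below in Kubernetes namespace `web-ns` configures the service `web` in the Consul namespace `web-ns`:
```yaml
# Assumes connectInject.consulNamespaces.mirroringK8S: true (the first configuration above).
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web
  namespace: web-ns # mirrored into the Consul namespace web-ns
spec:
  protocol: http
```
With the prefix variant (`mirroringK8SPrefix: k8s-`), the same resource would instead configure `web` in the Consul namespace `k8s-web-ns`.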
#### ServiceIntentions Special Case (Enterprise)
`ServiceIntentions` are different from the other custom resources because the
name of the resource doesn't matter. For other resources, the name of the resource
determines which service it configures. For example, this resource configures
the service `web`:
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: web
spec:
  protocol: http
```
For `ServiceIntentions`, because we need to support wildcard intentions
(e.g. `foo => * (allow)`, meaning that `foo` can talk to **any** service),
and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name`
to configure the destination service for the intention:
```yaml
# foo => * (allow)
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: name-does-not-matter
spec:
  destination:
    name: '*'
  sources:
    - name: foo
      action: allow
---
# foo => web (allow)
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: name-does-not-matter
spec:
  destination:
    name: web
  sources:
    - name: foo
      action: allow
```
In addition, we support the field `spec.destination.namespace` to configure
the destination service's Consul namespace. If `spec.destination.namespace`
is empty, the Consul namespace used will be the same as for the other
config entries, as outlined above.
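For example, a minimal sketch (the namespace `web-ns` is illustrative) that allows `foo` to talk to `web` in the Consul namespace `web-ns`:
```yaml
# foo => web in Consul namespace web-ns (allow)
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: name-does-not-matter
spec:
  destination:
    name: web
    namespace: web-ns # the destination service's Consul namespace
  sources:
    - name: foo
      action: allow
```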

View File

@@ -11,8 +11,21 @@ description: >-
Consul clients running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes.
## Networking
Within one datacenter, Consul typically requires a fully connected
[network](/docs/architecture). This means the IPs of every client and server
agent should be routable by every other client and server agent in the
datacenter. Clients need to be able to [gossip](/docs/architecture/gossip) with
every other agent and make RPC calls to servers. Servers need to be able to
gossip with every other agent. See [Architecture](/docs/architecture) for more details.
-> **Consul Enterprise customers** may use [network
segments](/docs/enterprise/network-segments) to enable non-fully-connected
topologies. However, out-of-cluster nodes must still be able to communicate
with the server pod or host IP addresses.
## Auto-join
The recommended way to join a cluster running within Kubernetes is to
use the ["k8s" cloud auto-join provider](/docs/agent/cloud-auto-join#kubernetes-k8s).
@@ -29,20 +42,122 @@ started using the [official Helm chart](/docs/k8s/helm):
```shell-session
$ consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server"'
```
-> **Note:** This auto-join command only connects on the default gossip port
8301, whether you are joining on the pod network or via host ports. Either a
Consul server or client that is already a member of the datacenter must be
listening on this port for the external client agent to be able to use
auto-join.
### Auto-join on the Pod network
In the default Consul Helm chart installation, Consul clients and servers are
routable only via their pod IPs for server RPCs and gossip (HTTP
API calls to Consul clients can also be made through host IPs). This means any
external client agents joining the Consul cluster running on Kubernetes
must be able to reach those pod IPs.
In many hosted Kubernetes environments, you will need to explicitly configure
your hosting provider to ensure that pod IPs are routable from external VMs.
See [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking),
[AWS EKS CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) and
[GKE VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips).
Assuming you have the [official Helm chart](/docs/k8s/helm) installed with its default values, follow these steps to join an external client agent:
1. Make sure the pod IPs of the clients and servers in Kubernetes are
routable from the VM and that the VM can access port 8301 (for gossip) and
port 8300 (for server RPC) on those pod IPs (see the connectivity check after these steps).
1. Make sure that the client and server pods running in Kubernetes can route
to the VM's advertise IP on its gossip port (default 8301).
1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM.
1. On the external VM, run:
```bash
consul agent \
-advertise="$ADVERTISE_IP" \
-retry-join='provider=k8s label_selector="app=consul,component=server"' \
-bind=0.0.0.0 \
-hcl='leave_on_terminate = true' \
-hcl='ports { grpc = 8502 }' \
-config-dir=$CONFIG_DIR \
-datacenter=$DATACENTER \
-data-dir=$DATA_DIR
```
1. Check that the join was successful by running `consul members`. Sample output:
```shell-session
/ $ consul members
Node Address Status Type Build Protocol DC Segment
consul-consul-server-0 10.138.0.43:9301 alive server 1.9.1 2 dc1 <all>
external-agent 10.138.0.38:8301 alive client 1.9.0 2 dc1 <default>
gke-external-agent-default-pool-32d15192-grs4 10.138.0.43:8301 alive client 1.9.1 2 dc1 <default>
gke-external-agent-default-pool-32d15192-otge 10.138.0.44:8301 alive client 1.9.1 2 dc1 <default>
gke-external-agent-default-pool-32d15192-vo7k 10.138.0.42:8301 alive client 1.9.1 2 dc1 <default>
```
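If the join fails, a quick sanity check of the routing requirements from step 1 is to probe the ports from the VM (a hypothetical example; replace `10.0.0.10` with a real pod IP from `kubectl get pods -o wide`, and note that `nc -vz` only tests TCP while gossip also uses UDP):
```shell-session
$ nc -vz 10.0.0.10 8301
$ nc -vz 10.0.0.10 8300
```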
### Auto-join via host ports
If your external VMs can't connect to Kubernetes pod IPs but can connect
to the internal host IPs of the nodes in the Kubernetes cluster, you can
expose the client and server ports on the host IPs instead.
1. Install the [official Helm chart](/docs/k8s/helm) with the following values:
```yaml
client:
  exposeGossipPorts: true # exposes client gossip ports as hostPorts
server:
  exposeGossipAndRPCPorts: true # exposes the server gossip and RPC ports as hostPorts
  ports:
    # Configures the server gossip port
    serflan:
      # Note that this must be different from 8301 to avoid conflicting with the client gossip hostPort
      port: 9301
```
This will expose the client gossip ports, the server gossip ports, and the server RPC port at `hostIP:hostPort`. Note that `hostIP` is the **internal** IP of the VM that the client/server pods are deployed on.
1. Make sure the IPs of the Kubernetes nodes are routable from the VM and
that the VM can access ports 8301 and 9301 (for gossip) and port 8300 (for
server RPC) on those node IPs.
1. Make sure the client and server pods running in Kubernetes can route to
the VM's advertise IP on its gossip port (default 8301).
1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM.
1. On the external VM, run (note the addition of `host_network=true` in the retry-join argument):
```bash
consul agent \
-advertise="$ADVERTISE_IP" \
-retry-join='provider=k8s host_network=true label_selector="app=consul,component=server"' \
-bind=0.0.0.0 \
-hcl='leave_on_terminate = true' \
-hcl='ports { grpc = 8502 }' \
-config-dir=$CONFIG_DIR \
-datacenter=$DATACENTER \
-data-dir=$DATA_DIR
```
1. Check that the join was successful by running `consul members`. Sample output:
```shell-session
/ $ consul members
Node Address Status Type Build Protocol DC Segment
consul-consul-server-0 10.138.0.43:9301 alive server 1.9.1 2 dc1 <all>
external-agent 10.138.0.38:8301 alive client 1.9.0 2 dc1 <default>
gke-external-agent-default-pool-32d15192-grs4 10.138.0.43:8301 alive client 1.9.1 2 dc1 <default>
gke-external-agent-default-pool-32d15192-otge 10.138.0.44:8301 alive client 1.9.1 2 dc1 <default>
gke-external-agent-default-pool-32d15192-vo7k 10.138.0.42:8301 alive client 1.9.1 2 dc1 <default>
```
## Manual join
If you are unable to use auto-join, you can follow the instructions in
either of the auto-join sections, but instead of using a `provider` key in the
`-retry-join` flag, pass the address of at least one
Consul server, e.g. `-retry-join=$CONSUL_SERVER_IP:$SERVER_SERFLAN_PORT`.
However, rather than hardcoding the IP, it's recommended to set up a DNS entry
that resolves to the Consul servers' pod IPs (if using the pod network) or to the
host IPs that the server pods are running on (if using host ports).
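A minimal sketch of such a manual join (the DNS name `consul-server.example.internal` and port 9301 are assumptions for illustration; substitute your servers' actual address and serf LAN port):
```bash
consul agent \
-advertise="$ADVERTISE_IP" \
-retry-join='consul-server.example.internal:9301' \
-bind=0.0.0.0 \
-config-dir=$CONFIG_DIR \
-datacenter=$DATACENTER \
-data-dir=$DATA_DIR
```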

View File

@@ -36,5 +36,5 @@ Consul-Terraform-Sync executes one or more automation tasks with the most recent
- [Contribute](https://github.com/hashicorp/consul-terraform-sync) to the open source project
- [Report](https://github.com/hashicorp/consul-terraform-sync/issues) bugs or request enhancements
- [Discuss](https://discuss.hashicorp.com/tags/c/consul/29/consul-terraform-sync) with the community or ask questions
- [Build integrations](/docs/nia/installation/requirements#how-to-create-a-compatible-terraform-module) for Consul-Terraform-Sync