---
layout: docs
page_title: Consul Enterprise Admin Partitions
description: Consul Enterprise enables you to create partitions that can be administrated across namespaces.
---
# Consul Enterprise Admin Partitions
<EnterpriseAlert>
This feature requires{' '}
<a href="https://www.hashicorp.com/products/consul/">Consul Enterprise</a>{' '}
with the Governance and Policy module.
</EnterpriseAlert>
This topic provides an overview of admin partitions, which are entities that define one or more administrative boundaries within a single Consul deployment.
## Introduction
Admin partitions exist a level above namespaces in the identity hierarchy. They contain one or more namespaces and allow multiple independent tenants to share a Consul server cluster. As a result, admin partitions enable you to define administrative and communication boundaries between services managed by separate teams or belonging to separate stakeholders. They can also segment production and non-production services within the Consul deployment.
-> **Preexisting resource nodes and namespaces**: Admin partitions were introduced in Consul 1.11. Resource nodes were not namespaced prior to 1.11. After upgrading to Consul 1.11 or later, all resource nodes will be namespaced.
### Default Admin Partition
Each Consul cluster will have at least one default admin partition (named `default`). Any resource created without specifying an admin partition will inherit the partition of the ACL token.
The `default` admin partition is special in that it may contain namespaces and other entities that are replicated between datacenters. The `default` partition should also contain the Consul servers.
-> **Preexisting resources and the `default` partition**: Admin partitions were introduced in Consul 1.11. After upgrading to Consul 1.11 or later, the `default` partition will contain all resources created in previous versions.
### Naming Admin Partitions
Only characters that are valid in DNS names can be used to name admin partitions.
Names must also start with a letter.
### Namespaces
When an admin partition is created, it will include a `default` namespace. You can create additional namespaces within the partition.
All namespaces must exist within an admin partition. By extension, the partition will also contain all namespaced resources.
Within a single datacenter, a namespace in one admin partition is logically separate from any other namespace with the same name in other admin partitions.
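For illustration, the following commands show one way to create the same namespace in two different partitions. The partition and namespace names are hypothetical, and the sketch assumes the `-partition` flag that Consul Enterprise 1.11 and later add to CLI commands such as `consul namespace`.

```shell-session
# Create the "team-1" namespace inside the "example-partition" admin partition.
consul namespace create -name "team-1" -partition "example-partition"

# The same namespace name can exist independently in another partition.
consul namespace create -name "team-1" -partition "other-partition"
```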
### Cross-datacenter Replication
Only resources in the default admin partition will be replicated to secondary datacenters. Non-default partitions are currently not supported in secondary datacenters.
### DNS Queries
Client agents will be configured to operate within a specific admin partition. The DNS interface will only return results for the admin partition within the scope of the client.
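For example, a standard service lookup made against a client agent's DNS interface only returns instances registered in that client's partition. The service name below is a placeholder.

```shell-session
# Returns instances of "api" registered in the partition that this client agent belongs to.
dig @127.0.0.1 -p 8600 api.service.consul
```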
### Service Mesh Configurations
Values specified for [`proxy-defaults`](/docs/connect/config-entries/proxy-defaults) configurations are scoped to a specific partition. Services registered in the partition will use the partition's `proxy-defaults` values.
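As a minimal sketch, a `proxy-defaults` entry written while scoped to a partition only affects proxies for services registered in that partition. The protocol value below is only an example; refer to the linked `proxy-defaults` documentation for the full schema.

```hcl
Kind = "proxy-defaults"
Name = "global"

Config {
  protocol = "http"
}
```

Writing this entry while scoped to the target partition (for example, via the `-partition` flag on `consul config write` in Consul Enterprise 1.11 and later) applies it to that partition's services.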
### Cross-partition Networking
You can configure services to be discoverable and accessible by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the `partition-exports` configuration entry in the partition where the services are registered. Refer to the [`partition-exports` documentation](/docs/connect/config-entries/partition-exports) for details.
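The following is a minimal sketch of a `partition-exports` entry that makes a service available to another partition. The `api` service and the `clients` consumer partition are hypothetical; see the linked documentation for the authoritative schema.

```hcl
Kind = "partition-exports"
Name = "default" # the partition in which the exported services are registered

Services = [
  {
    Name      = "api"
    Namespace = "default"
    Consumers = [
      {
        Partition = "clients"
      }
    ]
  }
]
```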
Additionally, the `upstreams` configuration for proxies in the source partition must specify the name of the destination partition so that listeners can be created. Refer to the [Upstream Configuration Reference](/docs/connect/registration/service-registration#upstream-configuration-reference) for additional information.
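Below is a rough sketch of a service registration in the source partition, where the upstream names the destination partition so that the proxy listener can be created. The service, namespace, partition, and port values are placeholders; refer to the Upstream Configuration Reference for the full set of fields.

```hcl
service {
  name = "web"
  port = 8080

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name      = "api"
            destination_namespace = "default"
            destination_partition = "clients"
            local_bind_port       = 10101
          }
        ]
      }
    }
  }
}
```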
## Requirements
Your Consul configuration must meet the following requirements to use admin partitions.
### Versions
* Consul 1.11.0 and newer
### Security Configurations
* The agent token used by the client agent will need to allow `node:write` in the admin partition.
* The `write` permission for `proxy-defaults` requires `mesh:write`. See [Admin Partition Rules](/docs/security/acl/acl-rules#admin-partition-rules) for additional information.
* The `write` permissions for ingress and terminating gateways require `mesh:write` privileges.
* Wildcards (`*`) are not supported when creating intentions for admin partitions, but you can use a wildcard to specify services within a partition.
* With the exception of the `default` admin partition, ACL rules configured for admin partitions are isolated, so policies defined in partitions outside of the `default` partition can only reference their local partition. A sample policy appears after this list.
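The following policy is a rough sketch of partition-scoped ACL rules that grant the permissions described above. The partition name is hypothetical, and the exact rule syntax is documented in [Admin Partition Rules](/docs/security/acl/acl-rules#admin-partition-rules).

```hcl
partition "example-partition" {
  # Needed for writing mesh-level config entries such as proxy-defaults.
  mesh = "write"

  # Allows client agents to register nodes in the partition's default namespace.
  namespace "default" {
    node_prefix "" {
      policy = "write"
    }
  }
}
```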
### Agent Configurations
* Client agents must specify the name of their admin partition in the agent configuration:
```hcl
partition = "<NAME>"
```
* The anti-entropy sync will use the configured admin partition name when registering the node. A fuller configuration sketch appears after this list.
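As a fuller sketch, a client agent configuration might combine the partition name with the usual agent settings. The node name, datacenter, and server address below are hypothetical placeholders.

```hcl
node_name  = "client-1"
datacenter = "dc1"
partition  = "example-partition"
retry_join = ["10.0.0.10"]
```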
### Kubernetes Requirements
One of the primary use cases for admin partitions is for enabling a service mesh on Kubernetes clusters. The following requirements must be met to create admin partitions on Kubernetes:
* Two or more Kubernetes clusters. Consul servers must be deployed on one of the clusters. The other clusters should run Consul clients.
* A Consul Enterprise license must be installed on each instance of Consul.
* The helm chart for consul-k8s v0.34.1 or greater.
* Consul 1.11.0-ent-alpha or greater.
* All Consul clients in the VPC must be able to communicate with the Consul servers.
* VPC firewall rules should be implemented that enable clients to communicate with the Consul servers in the `default` partition. The server nodes should be deployed to a single cluster.
## Usage
This section describes how to deploy Consul admin partitions to Kubernetes clusters and points you to the CLI reference for interacting with the admin partitions API on the command line.
### Deploying Consul with Admin Partitions on Kubernetes
The expected use case is to create admin partitions on Kubernetes clusters. Many organizations prefer to use cloud-managed Kubernetes offerings to provision separate Kubernetes clusters for individual teams, business units, or environments, rather than deploying a single, large Kubernetes cluster. When these organizations attempt to use a service mesh to enable cross-cluster activities, such as administration tasks and communication between nodes, they encounter problems.
The following procedure will result in a different admin partition in each Kubernetes cluster. The Consul clients running in the cluster with servers will be in the `default` partition. Another partition called `clients` will also be created.
Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernetes-requirements) before proceeding.
1. Update the firewall rules so that pods containing Consul clients and pods containing Consul servers can send and receive traffic. Refer to your virtual cloud provider's documentation for instructions on how to configure firewall rules.
1. Create the license secret in each cluster, e.g.:
```shell-session
kubectl create secret generic license --from-file=key=[license file path, e.g. ./license.hclic]
```
This step must also be completed for each workload cluster.
1. Create a server configuration file to override the default Consul Helm chart settings:
<CodeBlockConfig lineNumbers>

```yaml
global:
  enableConsulNamespaces: true
  tls:
    enabled: true
  image: hashicorp/consul-enterprise:1.11.0-ent-beta3
  adminPartitions:
    enabled: true
server:
  exposeGossipAndRPCPorts: true
  enterpriseLicense:
    secretName: license
    secretKey: key
connectInject:
  enabled: true
  transparentProxy:
    defaultEnabled: false
  consulNamespaces:
    mirroringK8S: true
controller:
  enabled: true
```

</CodeBlockConfig>
Note that the `transparentProxy` configuration is disabled. This is to enable multi-cluster networking.
1. Start the Consul server(s) using the custom configuration file:
```shell-session
helm install server hashicorp/consul -f server.yaml
```
1. After the server starts, get the external IP address for the partition service so that it can be added to the client configuration. The partition service is a `LoadBalancer` type. The IP address is where clients across your partitions will communicate with the servers in this cluster.
```shell-session
kubectl get service
NAME                                   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                                   AGE
kubernetes                             ClusterIP      10.96.0.1        <none>          443/TCP                                                                   3m
servers-consul-connect-injector-svc    ClusterIP      10.97.175.39     <none>          443/TCP                                                                   30s
servers-consul-controller-webhook      ClusterIP      10.100.22.99     <none>          443/TCP                                                                   30s
servers-consul-dns                     ClusterIP      10.103.43.20     <none>          53/TCP,53/UDP                                                             30s
servers-consul-partition-service       LoadBalancer   10.111.255.152   35.192.119.38   8501:30643/TCP,8301:30466/TCP,8300:30657/TCP                              30s
servers-consul-server                  ClusterIP      None             <none>          8501/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   30s
servers-consul-ui                      ClusterIP      10.106.240.55    <none>          443/TCP                                                                   30s
```
1. Create the workload configuration for client nodes in your cluster. Create a configuration for each admin partition. In the following example, the external IP address from the previous step has been applied:
```yaml
global:
  enabled: false
  enableConsulNamespaces: true
  image: hashicorp/consul-enterprise:1.11.0-ent-beta3
  adminPartitions:
    enabled: true
    name: "clients" # partition name
  tls:
    enabled: true
    caCert:
      secretName: consul-consul-ca-cert
      secretKey: tls.crt
    caKey:
      secretName: consul-consul-ca-key
      secretKey: tls.key
server:
  enterpriseLicense:
    secretName: license
    secretKey: key
externalServers:
  enabled: true
  hosts: ["35.192.119.38"] # Insert external IP of the LoadBalancer here
  tlsServerName: server.dc1.consul
client:
  enabled: true
  exposeGossipPorts: true
  join: ["35.192.119.38"] # Insert external IP of the LoadBalancer here
connectInject:
  enabled: true
  consulNamespaces:
    mirroringK8S: true
controller:
  enabled: true
```
1. Copy the server certificate to the workload cluster.
```shell-session
kubectl get secret server-consul-ca-cert --context server -o yaml | kubectl apply --context client -f -
```
1. Copy the server key to the workload cluster.
```shell-session
kubectl get secret consul-consul-ca-key --context server -o yaml | kubectl apply --context client -f -
```
1. Start the workload client clusters:
```shell-session
helm install client hashicorp/consul -f client.yaml
```
### CLI Usage
You can create and manage admin partitions through the CLI. Refer to the [admin partition CLI documentation](/commands/admin-partition) for details.
## Known Limitations
* Gossip between nodes in different admin partitions must be constrained. You can accomplish this through the use of [network segments](network-segments).
* Cross-partition communication is not currently supported.
* Partitions can only be created in the primary datacenter.