---
layout: docs
page_title: Transparent Proxy | Service Mesh
description: >-
  Learn how transparent proxies enable Consul on Kubernetes to direct inbound and outbound
  traffic through the service mesh. Use a transparent proxy to increase application security
  without configuring individual services and intentions.
---

# Transparent Proxies in a Service Mesh

Transparent proxy allows applications to communicate through the mesh without changing their configuration. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh.

#### Without Transparent Proxy

![Diagram demonstrating that without transparent proxy, applications must "opt in" to connecting to their dependencies through the mesh](/img/consul-connect/without-transparent-proxy.png)

Without transparent proxy, application owners need to:

1. Explicitly configure upstream services, choosing a local port to access them.
1. Change the application to access `localhost:<chosen-port>`.
1. Configure the application to listen only on the loopback interface to prevent unauthorized traffic from bypassing the mesh.

#### With Transparent Proxy

![Diagram demonstrating that with transparent proxy, connections are automatically routed through the mesh](/img/consul-connect/with-transparent-proxy.png)

With transparent proxy:

1. Local upstreams are inferred from service intentions and peered upstreams are inferred from imported services, so no explicit configuration is needed.
1. Outbound connections pointing to a Kubernetes DNS record "just work" because network rules redirect them through the proxy.
1. Inbound traffic is forced to go through the proxy to prevent unauthorized direct access to the application.

#### Overview

Transparent proxy allows users to reach other services in the service mesh while ensuring that inbound and outbound traffic for services in the mesh is directed through the sidecar proxy. Traffic is secured and only reaches intended destinations because the proxy can enforce security and policy such as TLS and Service Intentions.

Previously, service mesh users needed to explicitly define upstreams for a service as a local listener on the sidecar proxy, and dial the local listener to reach the appropriate upstream. Users also had to set intentions to allow specific services to talk to one another. Transparent proxying reduces this duplication by determining upstreams implicitly from Service Intentions and from services imported from a peer. Explicit upstreams are still supported in the [proxy service registration](/docs/connect/registration/service-registration) on VMs and via the [annotation](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) in Kubernetes.

To support transparent proxying, Consul's CLI now has a command, [`consul connect redirect-traffic`](/commands/connect/redirect-traffic), that redirects traffic through an inbound and outbound listener on the sidecar. Consul also watches Service Intentions and imported services, then configures the Envoy proxy with the appropriate upstream IPs. If the default ACL policy is "allow", then Service Intentions are not required. In Consul on Kubernetes, the traffic redirection command is automatically set up via an init container.
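For reference, invoking the redirection command directly on a VM might look like the following sketch. The proxy service ID `web-sidecar-proxy` and UID `1234` are placeholders; see the [`consul connect redirect-traffic`](/commands/connect/redirect-traffic) reference for the full set of flags supported by your Consul version.

```bash
# Illustrative only: redirect this host's inbound and outbound traffic through
# the sidecar proxy registered as "web-sidecar-proxy", skipping traffic from
# the proxy's own user (UID 1234) so Envoy's connections are not looped back.
consul connect redirect-traffic \
  -proxy-id="web-sidecar-proxy" \
  -proxy-uid="1234"
```

On Kubernetes you do not run this yourself; the injected init container applies the equivalent redirection rules for each Pod.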
## Prerequisites

### Kubernetes

* To use transparent proxy on Kubernetes, Consul-helm >= `0.32.0` and Consul-k8s >= `0.26.0` are required in addition to Consul >= `1.10.0`.
* If the default policy for ACLs is "deny", then Service Intentions should be set up to allow intended services to connect to each other. Otherwise, all Connect services can talk to all other services.
* If using Transparent Proxy, all worker nodes within a Kubernetes cluster must have the `ip_tables` kernel module loaded, e.g. `modprobe ip_tables`.

The Kubernetes integration takes care of registering Kubernetes services with Consul, injecting a sidecar proxy, and enabling traffic redirection.

## Upgrading to Transparent Proxy

~> When upgrading from older versions (i.e. Consul-k8s < `0.26.0` or Consul-helm < `0.32.0`) to Consul-k8s >= `0.26.0` and Consul-helm >= `0.32.0`, make sure to follow the upgrade steps [here](/docs/upgrading/upgrade-specific/#transparent-proxy-on-kubernetes).

## Configuration

### Enabling Transparent Proxy

Transparent proxy can be enabled in Kubernetes on the whole cluster via the Helm value:

```yaml
connectInject:
  transparentProxy:
    defaultEnabled: true
```

It can also be enabled on a per-namespace basis by setting the label `consul.hashicorp.com/transparent-proxy=true` on the Kubernetes namespace. This overrides the Helm value `connectInject.transparentProxy.defaultEnabled` and defines the default behavior of Pods in the namespace. For example:

```bash
kubectl label namespaces my-app "consul.hashicorp.com/transparent-proxy=true"
```

It can also be enabled on a per-service basis via the annotation `consul.hashicorp.com/transparent-proxy=true` on the Pod for each service, which overrides both the Helm value and the namespace label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: static-server
spec:
  selector:
    app: static-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      name: static-server
      labels:
        app: static-server
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/transparent-proxy': 'true'
    spec:
      containers:
        - name: static-server
          image: hashicorp/http-echo:latest
          args:
            - -text="hello world"
            - -listen=:8080
          ports:
            - containerPort: 8080
              name: http
      serviceAccountName: static-server
```

### Kubernetes HTTP Health Probes Configuration

Traffic redirection interferes with [Kubernetes HTTP health probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/): the probes expect that the kubelet can reach the application container directly on the probe's endpoint, but that traffic is redirected through the sidecar proxy, causing errors because the kubelet itself is not using a mesh proxy to encrypt that traffic. For this reason, Consul allows you to [overwrite Kubernetes HTTP health probes](/docs/k8s/connect/health) to point to the proxy instead. This can be done using the Helm value `connectInject.transparentProxy.defaultOverwriteProbes` or the Pod annotation `consul.hashicorp.com/transparent-proxy-overwrite-probes`.
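For illustration, a Deployment that opts a single workload into probe overwriting might annotate its Pod template as in the following sketch. The container image, probe path, and port are placeholders.

```yaml
# Illustrative sketch: the Pod template opts into probe overwriting so the
# kubelet's HTTP liveness check is pointed at the sidecar proxy instead of
# hitting the application container directly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      labels:
        app: static-server
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/transparent-proxy-overwrite-probes': 'true'
    spec:
      containers:
        - name: static-server
          image: hashicorp/http-echo:latest
          args:
            - -text="hello world"
            - -listen=:8080
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /      # placeholder path and port
              port: 8080
```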
### Traffic Redirection Configuration

Pods with transparent proxy enabled will have an init container injected that sets up traffic redirection for all inbound and outbound traffic through the sidecar proxies. This includes all traffic by default, with the ability to configure exceptions on a per-Pod basis. The following Pod annotations allow you to exclude certain traffic from redirection to the sidecar proxies (see the sketch after this list):

- [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports)
- [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports)
- [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs)
- [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-uids)
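For illustration, the Pod template annotations below sketch how a workload might exclude a metrics port and an external CIDR from redirection. The port and CIDR values are placeholders, not recommendations.

```yaml
# Illustrative sketch: Pod template annotations that carve traffic out of
# redirection. The port and CIDR values below are placeholders.
metadata:
  annotations:
    'consul.hashicorp.com/connect-inject': 'true'
    # Allow non-mesh clients to reach a metrics port without going through the proxy.
    'consul.hashicorp.com/transparent-proxy-exclude-inbound-ports': '9090'
    # Send traffic destined for this CIDR directly, bypassing the proxy.
    'consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs': '10.0.0.0/8'
```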
### Dialing Services Across Kubernetes Clusters

- You cannot use transparent proxy in a deployment configuration with [federation between Kubernetes clusters](/docs/k8s/installation/multi-cluster/kubernetes). Instead, services in one Kubernetes cluster must explicitly dial a service in a Consul datacenter in another Kubernetes cluster using the [consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. For example, an annotation of `"consul.hashicorp.com/connect-service-upstreams": "my-service:1234:dc2"` reaches an upstream service called `my-service` in the datacenter `dc2` on port `1234`.

- You cannot use transparent proxy in a deployment configuration with a [single Consul datacenter spanning multiple Kubernetes clusters](/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s). Instead, services in one Kubernetes cluster must explicitly dial a service in another Kubernetes cluster using the [consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. For example, an annotation of `"consul.hashicorp.com/connect-service-upstreams": "my-service:1234"` reaches an upstream service called `my-service` in another Kubernetes cluster on port `1234`. Although transparent proxy is enabled, Kubernetes DNS is not utilized when communicating between services that exist on separate Kubernetes clusters.

- In a deployment configuration with [cluster peering](/docs/connect/cluster-peering), transparent proxy is fully supported, so dialing services explicitly is not required.

## Known Limitations

- Deployment configurations with federation across multiple Kubernetes clusters, or with a single Consul datacenter spanning multiple Kubernetes clusters, must explicitly dial a service in another datacenter or cluster using annotations.

- When dialing headless services, the request is proxied using a plain TCP proxy. The upstream's protocol is not considered.

## Using Transparent Proxy

In Kubernetes, services can reach other services via their [Kubernetes DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) address or through Pod IPs, and that traffic will be transparently sent through the proxy. Connect services in Kubernetes are required to have a Kubernetes service selecting the Pods.

~> **Note**: In order to use Kubernetes DNS, the Kubernetes service name needs to match the Consul service name. This is the case by default, unless the service Pods have the annotation `consul.hashicorp.com/connect-service` overriding the Consul service name.

Transparent proxy is enabled by default in Consul-helm >= `0.32.0`. The Helm value used to enable or disable transparent proxy for all applications in a Kubernetes cluster is `connectInject.transparentProxy.defaultEnabled`.

Each Pod for the service will be configured with iptables rules to direct all inbound and outbound traffic through an inbound and outbound listener on the sidecar proxy. The proxy will be configured to know how to route traffic to the appropriate upstream services based on [Service Intentions](/docs/connect/config-entries/service-intentions). This means Connect services no longer need to use the `consul.hashicorp.com/connect-service-upstreams` annotation to configure upstreams explicitly. Once the Service Intentions are set, they can simply address the upstream services using Kubernetes DNS.

As of Consul-k8s >= `0.26.0` and Consul-helm >= `0.32.0`, a Kubernetes service that selects application pods is required for Connect applications, for example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-app
  namespace: default
spec:
  selector:
    app: sample-app
  ports:
    - protocol: TCP
      port: 80
```

In the example above, if another service wants to reach `sample-app` via transparent proxying, it can dial `sample-app.default.svc.cluster.local` using [Kubernetes DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/). If ACLs with a default "deny" policy are enabled, it also needs a [ServiceIntentions](/docs/connect/config-entries/service-intentions) resource allowing it to talk to `sample-app`.

### Headless Services

For services that are not addressed using a virtual cluster IP, the upstream service must be configured using the [DialedDirectly](/docs/connect/config-entries/service-defaults#dialeddirectly) option. Individual instance addresses can then be discovered using DNS and dialed through the transparent proxy. When this mode is enabled on the upstream, Connect certificates are still presented for mTLS and intentions are still enforced at the destination. Note that when dialing individual instances, HTTP routing rules configured with config entries are **not** considered; the transparent proxy acts as a TCP proxy to the original destination IP address.
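For illustration, enabling this option with a `ServiceDefaults` custom resource might look like the following sketch. The service name `sample-headless` is a placeholder, and the exact field names should be verified against the [service-defaults](/docs/connect/config-entries/service-defaults#dialeddirectly) reference for your Consul version.

```yaml
# Illustrative sketch: allow this upstream's individual instance addresses to
# be dialed directly through the transparent proxy (placeholder service name).
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: sample-headless
spec:
  transparentProxy:
    dialedDirectly: true
```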