From 33c96c4ac5af43aad6d15e9f960903e06a0cf5e6 Mon Sep 17 00:00:00 2001
From: Paul Glass
Date: Wed, 3 Nov 2021 17:20:01 -0500
Subject: [PATCH] docs: ECS graceful shutdown docs for GA

---
 website/content/docs/ecs/architecture.mdx | 136 ++++++++++++++++++++++
 website/public/img/ecs-task-shutdown.svg  |   4 +
 2 files changed, 140 insertions(+)
 create mode 100644 website/public/img/ecs-task-shutdown.svg

diff --git a/website/content/docs/ecs/architecture.mdx b/website/content/docs/ecs/architecture.mdx
index 2dcfcbf689..106bd1307c 100644
--- a/website/content/docs/ecs/architecture.mdx
+++ b/website/content/docs/ecs/architecture.mdx
@@ -39,6 +39,142 @@ This diagram shows the timeline of a task starting up and all its containers:
 - **T1:** The `sidecar-proxy` container starts. It runs Envoy by executing `envoy -c `.
 - **T2:** The `sidecar-proxy` container is marked as healthy by ECS. It uses a health check that detects if its public listener port is open. At this time, your application containers are started since all Consul machinery is ready to service requests. The only running containers are `consul-client`, `sidecar-proxy`, and your application container(s).
+### Task Shutdown
+
+Graceful shutdown is supported when deploying Consul on ECS, which means the following:
+
+* No incoming traffic from the mesh is directed to this task during shutdown.
+* Outgoing traffic to the mesh is possible during shutdown.
+
+This diagram shows an example timeline of a task shutting down:
+
+Task Shutdown Timeline
+
+- **T0**: ECS sends a TERM signal to all containers.
+  - `consul-client` begins to gracefully leave the Consul cluster, since it is configured with `leave_on_terminate = true`.
+  - `health-sync` stops syncing health status from ECS into Consul checks.
+  - `sidecar-proxy` ignores the TERM signal and continues running until it notices that the `user-app` container has exited. This allows the application container to continue to make outgoing requests through the proxy to the mesh.
+    This is possible due to an entrypoint override for the container, `consul-ecs envoy-entrypoint`.
+  - `user-app` exits since, in this example, it does not intercept the TERM signal.
+- **T1**:
+  - `health-sync` updates its Consul checks to critical status, and then exits. This ensures this service instance is marked unhealthy.
+  - The `sidecar-proxy` container checks the ECS task metadata. It notices the `user-app` container has stopped, and exits.
+- **T2**:
+  - `consul-client` finishes gracefully leaving the Consul cluster and exits.
+  - Updates about this task have reached the rest of the Consul cluster, which means downstream proxies are updated to stop sending traffic to this task.
+- **T3**: All containers have exited. ECS notices this, and will soon change the task status to `STOPPED`.
+- **T4**: (Not applicable to this example, but if any containers are still running at this point, ECS forcefully stops them by sending a KILL signal.)
+
+#### Task Shutdown: Completely Avoiding Application Errors
+
+Because Consul service mesh is a distributed, eventually consistent system that is subject to network latency, it is hard to achieve a perfect graceful shutdown.
+
+In particular, you may have noticed the following issue in the example above, where it is possible that an application that has exited still receives incoming traffic:
+
+* The `user-app` container exits in **T0**.
+* Afterwards, in **T2**, downstream services are updated to no longer send traffic to this task.
+
+As a result, downstream applications will see errors when requests are directed to this instance. This can occur for a short period (seconds or less) at the beginning of task shutdown, until the rest of the Consul cluster knows to avoid sending traffic to this instance.
+
+Here are a couple of approaches to address this issue:
+
+1. 
+   Modify your application container to continue running for a short period of time into task shutdown. By doing this, the application is still running and can respond to incoming requests successfully at the beginning of task shutdown. This allows time for the Consul cluster to update downstream proxies to stop sending traffic to this task.
+
+   One way to accomplish this is with an entrypoint override for your application container which ignores the TERM signal sent by ECS. Here is an example shell script:
+
+   ```bash
+   # Run the provided command in a background subprocess.
+   $0 "$@" &
+   export PID=$!
+
+   onterm() {
+     echo "Caught sigterm. Sleeping 10s..."
+     sleep 10
+     exit 0
+   }
+
+   onexit() {
+     if [ -n "$PID" ]; then
+       kill $PID
+       wait $PID
+     fi
+   }
+
+   trap onterm TERM
+   trap onexit EXIT
+   wait $PID
+   ```
+
+   This script runs the application in a subprocess. It uses a `trap` to intercept the TERM signal. It then sleeps for ten seconds before exiting normally. This allows the application process to continue running after receiving the TERM signal.
+
+   If this script is saved as `./app-entrypoint.sh`, then you can use it for your ECS tasks using the `mesh-task` Terraform module:
+
+   ```hcl
+   module "my_task" {
+     source  = "hashicorp/consul-ecs/aws//modules/mesh-task"
+     version = ""
+     container_definitions = [
+       {
+         name       = "my-app"
+         image      = "..."
+         entryPoint = ["/bin/sh", "-c", file("./app-entrypoint.sh")]
+         command    = ["python", "manage.py", "runserver", "127.0.0.1:8080"]
+         ...
+       }
+       ...
+     ]
+   }
+   ```
+
+   This example sets the `entryPoint` for the container, which overrides the default entrypoint from the image. When the container starts in ECS, the `command` list is passed as arguments to the `entryPoint` command. Putting this together, the container starts with the command `/bin/sh -c "" python manage.py runserver 127.0.0.1:8080`, where the empty string stands for the contents of `./app-entrypoint.sh` read by `file()`.
+
+2. If the traffic is HTTP(S), you can enable retry logic through Consul Connect [Service Router](/docs/connect/config-entries/service-router).
+   This configures proxies to retry requests when they receive an error. When a request to an upstream service fails, Envoy can retry the request against a different instance of that service that may be able to respond successfully.
+
+   To enable retries through Service Router for a service named `example`, first ensure the service's protocol is configured as `http` using a `service-defaults` config entry:
+
+   ```hcl
+   Kind     = "service-defaults"
+   Name     = "example"
+   Protocol = "http"
+   ```
+
+   Then apply the config entry:
+
+   ```shell-session
+   $ consul config write example-defaults.hcl
+   ```
+
+   Then add retry settings for the service in a `service-router` config entry:
+
+   ```hcl
+   Kind = "service-router"
+   Name = "example"
+   Routes = [
+     {
+       Match {
+         HTTP {
+           PathPrefix = "/"
+         }
+       }
+       Destination {
+         NumRetries            = 5
+         RetryOnConnectFailure = true
+         RetryOnStatusCodes    = [503]
+       }
+     }
+   ]
+   ```
+
+   To apply this, run the following:
+
+   ```shell-session
+   $ consul config write example-router.hcl
+   ```
+
+   This Service Router configuration sets `PathPrefix = "/"`, which matches all requests to the `example` service. It sets the `NumRetries`, `RetryOnConnectFailure`, and `RetryOnStatusCodes = [503]` fields so that incoming requests are retried. We've seen Envoy return a 503 when its application container has exited, but it is possible there could be other error codes depending on your environment.
+
+   See the Consul Connect [Configuration Entries](/docs/connect/config-entries/index) documentation for more detail.
+
 ### Automatic ACL Token Provisioning
 
 Consul ACL tokens secure communication between agents and services.

diff --git a/website/public/img/ecs-task-shutdown.svg b/website/public/img/ecs-task-shutdown.svg
new file mode 100644
index 0000000000..225fb156c9
--- /dev/null
+++ b/website/public/img/ecs-task-shutdown.svg
@@ -0,0 +1,4 @@
+
+
+
+
[SVG markup stripped; the diagram's recoverable text labels: lanes for `consul-client`, `health-sync`, `sidecar-proxy`, and `user-app`, across timeline markers T0 (TERM), T1, T2, T3, and T4 (KILL).]
\ No newline at end of file