mirror of https://github.com/status-im/consul.git
Apply suggestions from code review
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
This commit is contained in:
parent
03ff1d07ef
commit
cd88085423
@@ -8,7 +8,7 @@ description: >-
The `/status` endpoints return status-related information for tasks. This endpoint returns a count of successful and failed task events that are recorded whenever tasks execute an automation. Currently, only the five most recent events are stored in Consul-Terraform-Sync (CTS). For more information on the hierarchy of status information and how it is collected, see [Status Information](/docs/nia/tasks#status-information).

-If you [run CTS with high availability enabled](/docs/nia/usage/run-ha), you can send requests to the `/status` endpoint on CTS leader or follower instances, but requests to a follower instance return a 400 Bad Request and error message. The error message depends on what information the follower instance is able to obtain about the leader. Refer to [Error Messages](/docs/nia/usage/errors-ref) for more information.
+If you [run CTS with high availability enabled](/docs/nia/usage/run-ha), you can send requests to the `/status` endpoint on CTS leader or follower instances. Requests to a follower instance, however, return a 400 Bad Request and error message. The error message depends on what information the follower instance is able to obtain about the leader. Refer to [Error Messages](/docs/nia/usage/errors-ref) for more information.
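For illustration only (this transcript is not part of the commit's diff): a status request sent to a locally running CTS leader on the default port `8558`, which appears elsewhere in these docs, looks like the following. Sending the same request to a follower instance returns the 400 Bad Request response described above.

```shell-session
$ curl localhost:8558/status/tasks
```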

## Overall Status
@@ -9,7 +9,7 @@ description: >-
The `/tasks` endpoints interact with the tasks that Consul-Terraform-Sync (CTS) is responsible for running.

-If you [run CTS with high availability enabled](/docs/nia/usage/run-ha), you can send requests to the `/tasks` endpoint on CTS leader or follower instances, but requests to a follower instance return a 400 Bad Request and error message. The error message depends on what information the follower instance is able to obtain about the leader. Refer to [Error Messages](/docs/nia/usage/errors-ref) for more information.
+If you [run CTS with high availability enabled](/docs/nia/usage/run-ha), you can send requests to the `/tasks` endpoint on CTS leader or follower instances. Requests to a follower instance, however, return a 400 Bad Request and error message. The error message depends on what information the follower instance is able to obtain about the leader. Refer to [Error Messages](/docs/nia/usage/errors-ref) for more information.

## Get Tasks
@@ -15,16 +15,16 @@ The following diagram shows the CTS workflow as it monitors the Consul service c
[![Consul-Terraform-Sync Architecture](/img/nia-highlevel-diagram.svg)](/img/nia-highlevel-diagram.svg)

-1. CTS monitors the state of Consul’s service catalog and its KV store. This process is described in [Watcher and Views](#watcher-and-views).
+1. CTS monitors the state of Consul’s service catalog and its KV store. This process is described in [Watcher and views](#watcher-and-views).
1. CTS detects a change.
1. CTS prompts Terraform to update the state of the infrastructure.

## Watcher and views

-CTS uses Consul’s [blocking queries](/api-docs/features/blocking) functionality to monitor Consul for updates. If an endpoint does not support blocking queries, CTS uses polling to watch for changes. These mechanisms are referred to in CTS as watchers.
+CTS uses Consul’s [blocking queries](/api-docs/features/blocking) functionality to monitor Consul for updates. If an endpoint does not support blocking queries, CTS uses polling to watch for changes. These mechanisms are referred to in CTS as *watchers*.
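The blocking-query mechanism can be sketched directly against the Consul HTTP API. This transcript is illustrative (not from the commit) and assumes a local agent on the default port `8500` with a hypothetical `web` service:

```shell-session
$ curl "localhost:8500/v1/catalog/service/web?index=42&wait=30s"
```

The request is held open until the service's `X-Consul-Index` advances past the supplied `index` or the `wait` window elapses; a watcher repeats this loop with the most recent index it has seen.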
|
||||
|
||||
The watcher maintains a separate thread for each value monitored and runs any tasks that depend on the watched value whenever it's updated. These threads are referred to as views. For example, a thread may run a task to update a proxy when the watcher sees that an instance changes to an unhealthy state.
|
||||
The watcher maintains a separate thread for each value monitored and runs any tasks that depend on the watched value whenever it is updated. These threads are referred to as _views_. For example, a thread may run a task to update a proxy when the watcher detects that an instance has become unhealthy .
|
||||
|
||||
## Tasks
|
||||
|
||||
|
@@ -51,17 +51,17 @@ The following types of state information are associated with CTS.
### Terraform state information

-By default, CTS stores [Terraform state data](https://www.terraform.io/docs/state/index.html) in the Consul KV, but you can specify where this information is stored by configuring the `backend` setting in the [Terraform driver configuration](/docs/nia/configuration#backend). If the backend is not configured to a local location, then the data persists if CTS stops.
+By default, CTS stores [Terraform state data](https://www.terraform.io/docs/state/index.html) in the Consul KV, but you can specify where this information is stored by configuring the `backend` setting in the [Terraform driver configuration](/docs/nia/configuration#backend). The data persists if CTS stops and the backend is configured to a remote location.

### CTS task and event data

By default, CTS stores task and event data in the Consul KV. This data is transient and does not persist unless you configure [CTS to run with high availability enabled](/docs/nia/usage/run-ha). High availability is an enterprise feature that promotes CTS resiliency. When high availability is enabled, CTS stores and persists task changes and events that occur when an instance stops.

-The data stored when operating in high availability mode includes task changes made using the task API or CLI, such as creating a new task, deleting a task, or enabling/disabling a task. You can empty the leader’s stored state information by starting CTS with the [`-reset-storage` flag](/docs/nia/cli/start#options).
+The data stored when operating in high availability mode includes task changes made using the task API or CLI. Examples of task changes include creating a new task, deleting a task, and enabling or disabling a task. You can empty the leader’s stored state information by starting CTS with the [`-reset-storage` flag](/docs/nia/cli/start#options).

## Instance compatibility checks (high availability)

-If you [run CTS with high availability enabled](/docs/nia/usage/run-ha), CTS performs instance compatibility checks to ensure that all instances in the cluster behave consistently, enabling CTS to properly perform automations configured in the state storage.
+If you [run CTS with high availability enabled](/docs/nia/usage/run-ha), CTS performs instance compatibility checks to ensure that all instances in the cluster behave consistently. Consistent instance behavior enables CTS to properly perform automations configured in the state storage.

The only incompatibility CTS checks for is a task configured with a [local module](/docs/nia/configuration#module). CTS instances that do not include this module directory are incompatible. Example log:
@@ -72,7 +72,7 @@ Refer to [Error Messages](/docs/nia/usage/errors-ref) for additional information
CTS instances perform a compatibility check on start-up based on the stored state and every five minutes after starting. If the check detects an incompatible CTS instance, it generates a log so that an operator can address it.

-CTS will continue to run and log the error message if it finds an incompatibility. CTS can still elect an incompatible instance to be the leader, but tasks affected by the incompatibility will not run successfully. This is because all active CTS instances enter [`once-mode`](/docs/nia/cli/start#modes) and run the tasks once when initially elected.
+CTS logs the error message and continues to run when it finds an incompatibility. CTS can still elect an incompatible instance to be the leader, but tasks affected by the incompatibility do not run successfully. This happens because all active CTS instances enter [`once-mode`](/docs/nia/cli/start#modes) and run the tasks once when initially elected.

## Security guidelines
@@ -25,7 +25,7 @@ The following table describes all of the available flags.
| --- | ---- | ---- | ---- | ---- |
| <nobr>`-config-dir`</nobr> | Required when <nobr>`-config-file`</nobr> is not set | string | Specifies a directory containing CTS instance configuration files to load on startup. Files must be in HCL or JSON format. You can specify the flag multiple times to load more than one directory of files. | none |
| <nobr>`-config-file`</nobr> | Required when `-config-dir` is not set | string | Specifies the CTS instance configuration file to load on startup. Files must be in HCL or JSON format. You can specify the flag multiple times to load more than one file. | none |
-| <nobr>`-inspect`</nobr> | Optional | boolean | Starts CTS in inspect mode (refer to [Modes](#modes) for additional information). In inspect mode, CTS displays the proposed state changes for all tasks once and exits. No changes are applied. If an error occurs before displaying all changes, CTS exits with a non-zero status. | `false` |
+| <nobr>`-inspect`</nobr> | Optional | boolean | Starts CTS in inspect mode. In inspect mode, CTS displays the proposed state changes for all tasks once and exits. No changes are applied. Refer to [Modes](#modes) for additional information. If an error occurs before displaying all changes, CTS exits with a non-zero status. | `false` |
| <nobr>`-inspect-task`</nobr> | Optional | string | Starts CTS in inspect mode for the specified task. CTS displays the proposed state changes for the specified task and exits. No changes are applied. <br/>You can specify the flag multiple times to display more than one task. <br/>If an error occurs before displaying all changes, CTS exits with a non-zero status. | none |
| <nobr>`-once`</nobr> | Optional | boolean | Starts CTS in once-mode (refer to [Modes](#modes) for additional information). In once-mode, CTS renders templates and runs tasks once. CTS does not start the process in long-running mode and disables buffer periods. | `false` |
| <nobr>`-reset-storage`</nobr> | Optional | boolean | <EnterpriseAlert inline /> Directs CTS to overwrite the state storage with new state information when the instance you are starting is elected the cluster leader. <br/>Only use this flag when running CTS in [high availability mode](/docs/nia/usage/run-ha). | `false` |
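For illustration (not part of this commit): combining the `start` command shown later in these docs with flags from the table above, a once-mode run against a configuration file might look like the following transcript. The file name is a placeholder.

```shell-session
$ consul-terraform-sync start -config-file config.hcl -once
```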
@@ -38,6 +38,6 @@ By default, CTS starts in long-running mode. The following table describes all a
| Name | Description | How to start |
| --- | --- | --- |
| <nobr>Long-running mode</nobr> | CTS starts in once-mode and switches to a long-running process. <p>During the once-mode phase, the daemon exits with a non-zero status if it encounters an error.</p><p>After successfully operating in once-mode, CTS begins a long-running process in which it logs errors and exits.</p><p>When the long-running process begins, the CTS daemon serves API and command requests.</p> | No additional flags. <br/>This is the default mode. |
-| Once mode | In once-mode, CTS renders templates and runs tasks once. CTS does not start the process in long-running mode and disables buffer periods. <p>Use once-mode before starting CTS in long-running mode to verify that your configuration is accurate and tasks update network infrastructure as expected.</p> | Add the `-once` flag when starting CTS. |
+| Once-mode | In once-mode, CTS renders templates and runs tasks once. CTS does not start the process in long-running mode and disables buffer periods. <p>Use once-mode before starting CTS in long-running mode to verify that your configuration is accurate and tasks update network infrastructure as expected.</p> | Add the `-once` flag when starting CTS. |
| Inspect mode | CTS displays the proposed state changes for all tasks once and exits. No changes are applied. If an error occurs before displaying all changes, CTS exits with a non-zero status. <p>Use inspect mode before starting CTS in long-running mode to debug one or more tasks and to verify that your tasks update network infrastructure as expected.</p> | Add the `-inspect` flag to verify all tasks. <p>Add the `-inspect-task` flag to inspect a single task. Use multiple flags to verify more than one task.</p> |
| <nobr>High availability mode</nobr> | Ensures that all changes to Consul that occur during a failover transition are processed and that CTS continues to operate as expected. CTS logs the errors and continues to operate without interruption. Refer to [Run Consul-Terraform-Sync with High Availability](/docs/nia/usage/run-ha) for additional information. | Add the `high_availability` block to your CTS instance configuration. <p>Refer to [Run Consul-Terraform-Sync with High Availability](/docs/nia/usage/run-ha) for additional information.</p> |
@@ -226,22 +226,22 @@ The `cluster` parameter contains configurations for the cluster you want to oper
| Parameter | Description | Required | Type |
| --------- | ---------- | -------- | ------ |
| `name` | Specifies the name of the cluster operating with high availability enabled. | Required | String |
-| `storage` | Configures how CTS stores state information. Refer to [State storage and persistence](/docs/nia/architecture#state-storage-and-persistence) for additional information. You can define storage for the `”consul”` resource. Refer to [High availability cluster storage](#high-availability-cluster-storage) for additional information | Optional | Object |
+| `storage` | Configures how CTS stores state information. Refer to [State storage and persistence](/docs/nia/architecture#state-storage-and-persistence) for additional information. You can define storage for the `"consul"` resource. Refer to [High availability cluster storage](#high-availability-cluster-storage) for additional information. | Optional | Object |

-#### High availability cluster storage
+#### `high_availability.cluster.storage`

The `high_availability.cluster.storage` object contains the following configurations.

| Parameter | Description | Required | Type |
| --------- | ---------- | -------- | ------ |
| `parent_path` | Defines a parent path in the Consul KV for CTS to store state information. Default is `consul-terraform-sync/`. CTS automatically appends the cluster name to the parent path, so the effective default directory for state information is `consul-terraform-sync/<cluster-name>`. | Optional | String |
-| `namespace` | specifies the namespace to use when storing state in the Consul KV. Default is inferred from the CTS ACL token. The fallback default is `default`. | Optional | String |
-| `session_ttl` | Specifies the session time-to-live for leader elections. You must specify a value greater than the `session_ttl_min` configured for Consul. A longer `session_ttl` results in a longer leader election after a failover. Default is `15s`. | Optional | `15s` |
+| `namespace` | Specifies the namespace to use when storing state in the Consul KV. Default is inferred from the CTS ACL token. The fallback default is `default`. | Optional | String |
+| `session_ttl` | Specifies the session time-to-live for leader elections. You must specify a value greater than the `session_ttl_min` configured for Consul. A longer `session_ttl` results in a longer leader election after a failover. Default is `15s`. | Optional | String |

### `high_availability.instance`

The `instance` parameter is an object that contains configurations unique to the CTS instance. You specify the following configurations:
-* `address`: (Optional) String value that specifies the IP address of the CTS instance to advertise to other instances. This parameter does not have a default value.
+- `address`: (Optional) String value that specifies the IP address of the CTS instance to advertise to other instances. This parameter does not have a default value.
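Putting the `cluster`, `storage`, and `instance` parameters above together, a minimal sketch of a high availability configuration might look like the following. The cluster name and address are placeholders, and the labeled `storage "consul"` block syntax is an assumption based on the `"consul"` resource mentioned in the table:

```hcl
high_availability {
  cluster {
    name = "cts-cluster" # placeholder cluster name

    storage "consul" {
      parent_path = "consul-terraform-sync/"
      namespace   = "default"
      session_ttl = "15s"
    }
  }

  instance {
    address = "cts-01.example.com" # placeholder address advertised to other instances
  }
}
```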

## Service
@@ -272,7 +272,7 @@ service {
## Task

-A `task` block configures which task to execute in automation. When the task executes can be determined by the `condition` block. You can specify the `task` block multiple times to configure multiple tasks, or you can omit it entirely. If task blocks are not specified in your initial configuration, you can add them to a running CTS instance by using the [`/tasks` API endpoint](/docs/nia/api/tasks#tasks) or the [CLI's `task` command](/docs/nia/cli/task#task).
+A `task` block configures which task to execute in automation. Use the `condition` block to specify when the task executes. You can specify the `task` block multiple times to configure multiple tasks, or you can omit it entirely. If task blocks are not specified in your initial configuration, you can add them to a running CTS instance by using the [`/tasks` API endpoint](/docs/nia/api/tasks#tasks) or the [CLI's `task` command](/docs/nia/cli/task#task).

```hcl
task {
@@ -23,9 +23,9 @@ The resolution is to create the missing local module on the incompatible CTS ins
In the following example response error:

-* CTS can determine the leader.
-* `high_availability.instance.address` is configured for the leader.
-* The CTS instance you sent the request to is not the leader.
+- CTS can determine the leader.
+- `high_availability.instance.address` is configured for the leader.
+- The CTS instance you sent the request to is not the leader.

```json
@@ -9,13 +9,13 @@ description: >-
The following components are required to run Consul-Terraform-Sync (CTS):

-* A Terraform Provider
-* A Terraform Module
-* A Consul cluster running outside of the `consul-terraform-sync` daemon
+- A Terraform provider
+- A Terraform module
+- A Consul cluster running outside of the `consul-terraform-sync` daemon

You can add support for your network infrastructure through Terraform providers so that you can apply Terraform modules to implement network integrations.

-The following guidance is for running CTS using the Terraform driver. The Terraform Cloud driver<EnterpriseAlert inline />has [additional prerequisites](/docs/nia/network-drivers/terraform-cloud#setting-up-terraform-cloud-driver).
+The following guidance is for running CTS using the Terraform driver. The Terraform Cloud driver<EnterpriseAlert inline /> has [additional prerequisites](/docs/nia/network-drivers/terraform-cloud#setting-up-terraform-cloud-driver).

## Run a Consul cluster
@@ -33,7 +33,7 @@ For information on compatible Consul versions, refer to the [Consul compatibilit
The Consul agent must be running in order to dynamically update network devices. Refer to the [Consul agent documentation](/docs/agent/index) for information about configuring and starting a Consul agent. For hands-on instructions about running Consul agents, refer to the [Getting Started: Run the Consul Agent Tutorial](https://learn.hashicorp.com/tutorials/consul/get-started-agent?in=consul/getting-started).

-When running a Consul agent with CTS in production, consider that CTS uses [blocking queries](/api-docs/features/blocking) to monitor task dependencies, such as changes to registered services. This results in multiple long-running TCP connections between CTS and the agent to poll changes for each dependency. Consul may quickly reach the agent connection limits if CTS is monitors a high number of services.
+When running a Consul agent with CTS in production, consider that CTS uses [blocking queries](/api-docs/features/blocking) to monitor task dependencies, such as changes to registered services. This results in multiple long-running TCP connections between CTS and the agent to poll changes for each dependency. Consul may quickly reach the agent connection limits if CTS is monitoring a high number of services.

To avoid reaching the limit prematurely, we recommend using HTTP/2 (requires HTTPS) to communicate between CTS and the Consul agent. When using HTTP/2, CTS establishes a single connection and reuses it for all communication. Refer to the [Consul Configuration section](/docs/nia/configuration#consul) for details.
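As an illustrative sketch (not from the commit), a CTS `consul` block pointed at an agent's HTTPS port allows the connection to negotiate HTTP/2. The address and the exact TLS parameter names here are assumptions; verify them against the Consul Configuration section linked above:

```hcl
consul {
  # Assumed HTTPS port; HTTP/2 requires TLS.
  address = "localhost:8501"

  tls {
    enabled = true
  }
}
```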
@@ -41,7 +41,7 @@ Alternatively, you can configure the [`limits.http_max_conns_per_client`](/docs/
### Register services

-CTS monitors the Consul catalog for service changes that lead to downstream changes to your network devices. Without services, your CTS daemon will be operational but idle. You can register services with your Consul agent by either loading a service definition or by sending an HTTP API request.
+CTS monitors the Consul catalog for service changes that lead to downstream changes to your network devices. Without services, your CTS daemon is operational but idle. You can register services with your Consul agent by either loading a service definition or by sending an HTTP API request.

The following HTTP API request example registers a service named `web` with your Consul agent:
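The example itself lies outside this diff hunk. As a sketch, such a request could use Consul's `/v1/agent/service/register` endpoint; the port value in the payload is an assumption:

```shell-session
$ curl --request PUT \
  --data '{"Name": "web", "Port": 80}' \
  localhost:8500/v1/agent/service/register
```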
@@ -68,7 +68,7 @@ For hands-on instructions on registering a service by loading a service definiti
For production environments, we recommend operating a Consul cluster rather than a single agent. Refer to [Getting Started: Create a Local Consul Datacenter](https://learn.hashicorp.com/tutorials/consul/get-started-create-datacenter?in=consul/getting-started) for instructions on starting multiple Consul agents and joining them into a cluster.

-## Network infrastructure (using a Terraform provider)
+## Network infrastructure using a Terraform provider

CTS integrations for the Terraform driver use Terraform providers as plugins to interface with specific network infrastructure platforms. The Terraform driver for CTS inherits the expansive collection of Terraform providers to integrate with. You can also specify a provider `source` in the [`required_providers` configuration](https://www.terraform.io/language/providers/requirements#requiring-providers) to use providers written by the community (requires Terraform 0.13 or later).
@@ -78,16 +78,16 @@ To find providers for the infrastructure platforms you use, browse the providers
### How to create a provider

-If a Terraform provider does not exist for your environment, you can create a new Terraform provider and publish it to the registery so that you can use it within a network integration task or createa compatible Terraform module. Refer to the following Terraform tutorial and documentation for additional information on creating and publishing providers:
+If a Terraform provider does not exist for your environment, you can create a new Terraform provider and publish it to the registry so that you can use it within a network integration task or create a compatible Terraform module. Refer to the following Terraform tutorial and documentation for additional information on creating and publishing providers:

-* [Setup and Implement Read](https://learn.hashicorp.com/tutorials/terraform/provider-setup)
-* [Publishing Providers](https://www.terraform.io/docs/registry/providers/publishing.html).
+- [Setup and Implement Read](https://learn.hashicorp.com/tutorials/terraform/provider-setup)
+- [Publishing Providers](https://www.terraform.io/docs/registry/providers/publishing.html)

-## Network integration (using a Terraform module)
+## Network integration using a Terraform module

-The Terraform module for a task in CTS is the core component of the integration. It declares which resources and how your infrastructure is dynamically updated. The module, along with how it is configured within a task, determines the condition under which your infrastructure is updated.
+The Terraform module for a task in CTS is the core component of the integration. It declares which resources to use and how your infrastructure is dynamically updated. The module, along with how it is configured within a task, determines the conditions under which your infrastructure is updated.

-Working with a Terraform provider, you can write an integration task for CTS by [creating a Terraform module](/docs/nia/terraform-modules) that is compatible with the Terraform driver or use a module built by partners below.
+Working with a Terraform provider, you can write an integration task for CTS by [creating a Terraform module](/docs/nia/terraform-modules) that is compatible with the Terraform driver. You can also use a [module built by partners](#partner-terraform-modules).

Refer to [Configuration](/docs/nia/configuration) for information about configuring CTS and how to use Terraform providers and modules for tasks.
@@ -118,7 +118,7 @@ The modules listed below are available to use and are compatible with CTS.
#### Citrix ADC

-- Create, Update and Delete Service Groups in Citrix ADC: [Terraform Registry](https://registry.terraform.io/modules/citrix/servicegroup-consul-sync-nia/citrixadc/latest) / [GitHub](https://github.com/citrix/terraform-citrixadc-servicegroup-consul-sync-nia)
+- Create, Update, and Delete Service Groups in Citrix ADC: [Terraform Registry](https://registry.terraform.io/modules/citrix/servicegroup-consul-sync-nia/citrixadc/latest) / [GitHub](https://github.com/citrix/terraform-citrixadc-servicegroup-consul-sync-nia)

#### F5
@@ -126,7 +126,7 @@ The modules listed below are available to use and are compatible with CTS.
#### NS1

-- Create, Delete and Update DNS Records and Zones: [Terraform Registry](https://registry.terraform.io/modules/ns1-terraform/record-sync-nia/ns1/latest) / [GitHub](https://github.com/ns1-terraform/terraform-ns1-record-sync-nia)
+- Create, Delete, and Update DNS Records and Zones: [Terraform Registry](https://registry.terraform.io/modules/ns1-terraform/record-sync-nia/ns1/latest) / [GitHub](https://github.com/ns1-terraform/terraform-ns1-record-sync-nia)

#### Palo Alto Networks
@@ -11,7 +11,7 @@ This topic describes how to run Consul-Terraform-Sync (CTS) configured for high 
## Introduction

-A network always has exactly one instance of the CTS cluster that is the designated leader. The leader is responsible for monitoring and running tasks. If the leader fails, CTS triggers the following process if it is configured for high availability:
+A network always has exactly one instance of the CTS cluster that is the designated leader. The leader is responsible for monitoring and running tasks. If the leader fails, CTS triggers the following process when it is configured for high availability:

1. The CTS cluster promotes a new leader from the pool of followers in the network.
1. The new leader begins running all existing tasks in `once-mode` in order to process changes that occurred during the failover transition period. In this mode, CTS runs all existing tasks one time.
@@ -29,11 +29,11 @@ The following diagram shows the CTS cluster state after the leader stops. CTS In
### Failover details

-* The time it takes for a new leader to be elected is determined by the `high_availability.cluster.storage.session_ttl` configuration. The minimum failover time is equal to the `session_ttl` value. The maximum failover time is double the `session_ttl` value.
-* If failover occurs during task execution, a new leader is elected and will attempt to run all tasks once before continuing to monitor for changes.
-* If using the [Terraform Cloud (TFC) driver](/docs/nia/network-drivers/terraform-cloud), the task finishes and CTS starts a new leader that attempts to queue a run for each task in TFC in once-mode.
-* If using [Terraform driver](/docs/nia/network-drivers/terraform), the task may complete depending on the cause of the failover. The new leader starts and attempts to run each task in [once-mode](/docs/nia/cli/start#modes). Depending on the module and provider, the task may require manual intervention to fix any inconsistencies between the infrastructure and Terraform state.
-* If failover occurs when no task is executing, CTS elects a new leader that attempts to run all tasks in once-mode.
+- The time it takes for a new leader to be elected is determined by the `high_availability.cluster.storage.session_ttl` configuration. The minimum failover time is equal to the `session_ttl` value. The maximum failover time is double the `session_ttl` value.
+- If failover occurs during task execution, a new leader is elected. The new leader will attempt to run all tasks once before continuing to monitor for changes.
+- If using the [Terraform Cloud (TFC) driver](/docs/nia/network-drivers/terraform-cloud), the task finishes and CTS starts a new leader that attempts to queue a run for each task in TFC in once-mode.
+- If using [Terraform driver](/docs/nia/network-drivers/terraform), the task may complete depending on the cause of the failover. The new leader starts and attempts to run each task in [once-mode](/docs/nia/cli/start#modes). Depending on the module and provider, the task may require manual intervention to fix any inconsistencies between the infrastructure and Terraform state.
+- If failover occurs when no task is executing, CTS elects a new leader that attempts to run all tasks in once-mode.

Note that driver behavior is consistent whether or not CTS is running in high availability mode.
@@ -46,7 +46,7 @@ Note that driver behavior is consistent whether or not CTS is running in high av
You must configure appropriate ACL permissions for your cluster. Refer to [ACL permissions](#) for details.

-It’s not required, but we recommend specifying the [TFC driver](/docs/nia/network-drivers/terraform-cloud) in your CTS configuration if you want to run in high availability mode.
+We recommend specifying the [TFC driver](/docs/nia/network-drivers/terraform-cloud) in your CTS configuration if you want to run in high availability mode.

## Configuration
@@ -97,7 +97,7 @@ We recommend deploying a cluster that includes three CTS instances. This is so t
## Modify an instance configuration

-You can implement a rolling update to update a non-task configuration for a CTS instance, such as the Consul connection settings, when high availability is enabled. If you need to update a task in the instance configuration, refer to [Modify tasks](#modify-tasks).
+You can implement a rolling update to update a non-task configuration for a CTS instance, such as the Consul connection settings. If you need to update a task in the instance configuration, refer to [Modify tasks](#modify-tasks).

1. Identify the leader CTS instance by either making a call to the [`status` API endpoint](/docs/nia/cli/start) or by checking the logs for the following entry:

```shell-session
@@ -111,7 +111,9 @@ You can implement a rolling update to update a non-task configuration for a CTS 
## Modify tasks

-When high availability is enabled, CTS persists task and event data (refer to State storage and persistence for additional information). You can use the following methods for modifying tasks when high availability is enabled. We recommend choosing a single method to make all task configuration changes. This is to limit inconsistencies between the state and the configuration that can occur when mixing methods.
+When high availability is enabled, CTS persists task and event data. Refer to [State storage and persistence](/docs/nia/architecture#state-storage-and-persistence) for additional information.
+
+You can use the following methods for modifying tasks when high availability is enabled. We recommend choosing a single method to make all task configuration changes because inconsistencies between the state and the configuration can occur when mixing methods.

### Delete and recreate the task (recommended)
@@ -137,19 +139,9 @@ Use the CTS API to identify the CTS leader instance and delete and replace a tas
  --data @payload.json \
  localhost:8558/v1/tasks
```

You can also use the [`task-create` command](/docs/nia/cli/task#task-create) to complete this step.

-Send a `POST` call to the `/task/<task-name>` endpoint and include the updated task in your payload.
-
-```shell-session
-$curl --header "Content-Type: application/json" \
-  --request POST \
-  --data @payload.json \
-  localhost:8558/v1/tasks
-```
-
-You can also use the `task-create` command to complete this step.

### Discard data with the `-reset-storage` flag

You can restart the CTS cluster using the [`-reset-storage` flag](/docs/nia/cli/options) to discard persisted data if you need to update a task.
@@ -181,4 +173,4 @@ Use the following troubleshooting procedure if a previous leader had been runnin
1. Check for differences between the previous leader and new leader, such as differences in configurations, environment variables, and local resources.
1. Start a new instance with the fix that resolves the issue.
1. Tear down the leader instance that has the issue and any other instances that may have the same issue.
-1. Restart the instance(s) to implement the fix.
+1. Restart the affected instances to implement the fix.
@@ -23,7 +23,7 @@ This topic describes the basic procedure for running Consul-Terraform-Sync (CTS)
$ consul-terraform-sync start -config-file <config.hcl>
```

-4. Check status of tasks. Replace port number if configured in Step 2. See additional API endpoints [here](/docs/nia/api)
+4. Check status of tasks. Replace port number if configured in Step 2. Refer to [Consul-Terraform-Sync API](/docs/nia/api) for additional information.

```shell-session
$ curl localhost:8558/status/tasks