A task is the translation of dynamic service information from the Consul catalog into downstream network infrastructure changes. Consul-Terraform-Sync executes tasks using network drivers. For the Terraform driver, the scope of a task is a Terraform module.
Below is an example task configuration:
```hcl
task {
  name        = "frontend-firewall-policies"
  description = "Add firewall policy rules for frontend services"
  providers   = ["fake-firewall", "null"]
  source      = "example/firewall-policy/module"
  services    = ["web", "image"]
}
```
In the example task above, the "fake-firewall" and "null" providers, listed in the `providers` field, are used. These providers should be configured in their own separate [`terraform_provider` blocks](/docs/nia/configuration#terraform-provider). They are used by the Terraform module "example/firewall-policy/module", configured in the `source` field, to create, update, and destroy resources. This module might, for example, use the providers to create and destroy firewall policy objects based on IP addresses. The IP addresses come from the "web" and "image" service instances configured in the `services` field. This service-level information is retrieved by Consul-Terraform-Sync, which watches the Consul catalog for changes.
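For reference, a minimal sketch of how those providers could be declared is shown below. The `terraform_provider` block is the configuration block linked above; because "fake-firewall" is a fictitious provider used only for this example, the arguments inside its block are placeholder values, not a real provider schema.

```hcl
# Sketch only: "fake-firewall" is a fictitious example provider, so the
# arguments below are placeholders rather than real provider settings.
terraform_provider "fake-firewall" {
  address  = "10.10.10.10"
  username = "admin"
  password = "password123"
}

# The null provider requires no configuration.
terraform_provider "null" {}
```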
A task is executed when a change in information is detected from the Consul catalog for any of the services the task is configured with. A change could involve one or more service values, such as an IP address, an added or removed service instance, or tags. A complete list of values that would cause a task to run is expanded below:
Consul-Terraform-Sync automatically generates any files needed to execute the network driver for each task. See [network drivers](/docs/nia/network-drivers) for more details on the files generated for the Terraform driver.
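As a rough illustration only (the actual generated files are produced by Consul-Terraform-Sync and their exact contents depend on the version and configuration), the generated root module for the example task would call the configured Terraform module and pass in the watched service data through a `services` variable, along these lines:

```hcl
# Illustrative sketch of a generated root module; the real files are
# generated by Consul-Terraform-Sync and will differ in detail.
module "frontend-firewall-policies" {
  source = "example/firewall-policy/module"

  # Service information from the Consul catalog is rendered into a
  # variable and passed to the task's module.
  services = var.services
}
```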
Consul-Terraform-Sync will attempt to execute each task once upon startup to synchronize infrastructure with the current state of Consul. The daemon will stop and exit if any error occurs while preparing the automation environment or executing a task for the first time. This helps ensure all tasks have proper configuration and are executable before the daemon transitions into running tasks in full automation as service changes are discovered over time. After all tasks have successfully executed once, task failures during automation are logged, and the task is retried or attempted again after a subsequent change.
Tasks are executed in near real-time when service changes are detected. For services or environments that are prone to flapping, it may be useful to configure a [buffer period](/docs/nia/configuration#buffer_period-1) for a task to accumulate changes before it is executed. The buffer period reduces the number of consecutive network calls to infrastructure by batching changes for a task over a short duration of time.
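For example, a buffer period could be set on an abbreviated version of the example task with a nested `buffer_period` block; the durations below are illustrative values, not recommendations:

```hcl
task {
  name     = "frontend-firewall-policies"
  source   = "example/firewall-policy/module"
  services = ["web", "image"]

  # Accumulate service changes for at least 5 seconds (and at most 20 seconds)
  # before executing the task, to avoid churn from flapping services.
  buffer_period {
    enabled = true
    min     = "5s"
    max     = "20s"
  }
}
```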
## Status Information
Status-related information is collected and offered via the [status API](/docs/nia/api#status) to provide visibility into what the tasks are doing and how they are running. Information is offered at three levels (lowest to highest):

- Event data
- Task status
- Overall status
These three levels form a hierarchy where each level of data informs the one above it. The lowest level, event data, is collected each time a task runs to update network infrastructure. This event data is then aggregated to inform individual task statuses. The count distribution of all task statuses informs the overall status's task summary.
### Event
Each time a task's services have an update, Consul-Terraform-Sync takes a series of steps to update network infrastructure. This process starts with updating the task's templates to fetch new service data from Consul and ends with any post-actions after modifying network infrastructure. An event is a data structure that captures information on this process of updating network infrastructure. It stores information to help determine whether the update to network infrastructure was successful, along with any errors that occurred.
"message": "example error: error while doing terraform-apply"
},
...
}
```
For complete information on the event structure, see [events in our API documentation](/docs/nia/api#event). Event information can be retrieved by using the [`include=events` parameter](/docs/nia/api#include) with the [task status API](/docs/nia/api#task-status).
### Task Status
Each time a task runs to update network infrastructure, event data is stored for that run. The five most recent events are stored for each task, and these stored events are used to determine the task's status. For example, if the most recent stored event is not successful but the others are, then the task's health status is "errored".
Task status information can be retrieved with the [task status API](/docs/nia/api#task-status). The API documentation includes details on the available health statuses and how they are calculated from the success or failure information of the stored events.
### Overall Status
Overall status returns a summary of the health statuses across all tasks. The summary is the count of tasks in each health status category.
Overall status information can be retrieved with the [overall status API](/docs/nia/api#overall-status). The API documentation includes details on the available health statuses and how the summary is calculated from the tasks' health status information.