terraform: remove modules in repo (#5085)

* terraform: remove modules in repo

These modules are not currently maintained or tested, and they were
created before the Terraform Module Registry existed, which is the
more appropriate way to share and distribute modules.

To limit confusion about the purpose of these modules, and to avoid
encouraging the use of something we are not confident in, this
removes them from this repository.

You can still access these modules if you depend on them by pinning to
a specific ref in Git.

It is recommended that you pin to a recent release in which
these modules still existed:

```
module "consul-aws" {
  source = "git::https://github.com/hashicorp/consul.git//terraform/aws?ref=v1.4.0"
}
```

More detail about module sources can be found on this page:

https://www.terraform.io/docs/modules/sources.html
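
If you want a maintained module instead, the Registry route would look roughly like the sketch below. The `hashicorp/consul/aws` address and the version are illustrative assumptions, not a pointer to an equivalent replacement:

```
module "consul" {
  # Illustrative registry address and version; browse the Terraform
  # Module Registry for a module that fits your use case.
  source  = "hashicorp/consul/aws"
  version = "0.1.0"
}
```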

* terraform: add a readme for anyone who can't find the modules
Jack Pearkes 2019-04-04 16:31:43 -07:00 committed by GitHub
parent f45e495e38
commit 152aa0cee1
30 changed files with 12 additions and 900 deletions

@@ -1,7 +1,14 @@
# Terraform Modules
These Terraform modules were removed from this repository in [GH-5085](https://github.com/hashicorp/consul/pull/5085).
This folder contains modules for Terraform that can set up Consul on
various systems. The target infrastructure provider is designated
by the folder name. See the `variables.tf` file in each for more documentation.
These modules are not currently maintained or tested, and they were created before the Terraform Module Registry existed, which is the more appropriate way to share and distribute modules.
To deploy Consul across multiple subnets/AZs on AWS, supply: `-var 'vpc_id=vpc-1234567' -var 'subnets={ "0" = "subnet-12345", "1" = "subnet-23456", "2" = "subnet-34567" }'`
To limit confusion about the purpose of these modules, and to avoid encouraging the use of something we are not confident in, this removes them from this repository.
You can still access these modules if you depend on them by pinning to a specific ref in Git. It is recommended that you pin to a recent release in which these modules still existed:
module "consul-aws" {
source = "git::https://github.com/hashicorp/consul.git//terraform/aws?ref=v1.4.0"
}
More detail about module sources can be found on this page:
https://www.terraform.io/docs/modules/sources.html

@@ -1,26 +0,0 @@
## Running the AWS templates to set up a Consul cluster
The platform variable defines the target OS (which in turn controls whether we install the Consul service via `systemd` or `upstart`). Options include:
- `ubuntu` (default)
- `rhel6`
- `rhel7`
- `centos6`
- `centos7`
For the AWS provider, set up your AWS environment as outlined in https://www.terraform.io/docs/providers/aws/index.html
To set up an Ubuntu-based cluster, run the following command, taking care to replace `key_name` and `key_path` with actual values:
`terraform apply -var 'key_name=consul' -var 'key_path=/Users/xyz/consul.pem'`
or
`terraform apply -var 'key_name=consul' -var 'key_path=/Users/xyz/consul.pem' -var 'platform=ubuntu'`
For CentOS7:
`terraform apply -var 'key_name=consul' -var 'key_path=/Users/xyz/consul.pem' -var 'platform=centos7'`
For the centos6 platform with the default AMI, you need to accept the AWS Marketplace terms and conditions. On first launch you will get an error containing a URL at which to accept them.
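
The same values can also live in a `terraform.tfvars` file instead of repeated `-var` flags. A minimal sketch, using placeholder key, VPC, and subnet IDs:

```
key_name = "consul"
key_path = "/Users/xyz/consul.pem"
platform = "ubuntu"
vpc_id   = "vpc-1234567"

subnets = {
  "0" = "subnet-12345"
  "1" = "subnet-23456"
  "2" = "subnet-34567"
}
```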

@@ -1,76 +0,0 @@
resource "aws_instance" "server" {
ami = "${lookup(var.ami, "${var.region}-${var.platform}")}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
count = "${var.servers}"
security_groups = ["${aws_security_group.consul.id}"]
subnet_id = "${lookup(var.subnets, count.index % var.servers)}"
connection {
user = "${lookup(var.user, var.platform)}"
private_key = "${file("${var.key_path}")}"
}
#Instance tags
tags {
Name = "${var.tagName}-${count.index}"
ConsulRole = "Server"
}
provisioner "file" {
source = "${path.module}/../shared/scripts/${lookup(var.service_conf, var.platform)}"
destination = "/tmp/${lookup(var.service_conf_dest, var.platform)}"
}
provisioner "remote-exec" {
inline = [
"echo ${var.servers} > /tmp/consul-server-count",
"echo ${aws_instance.server.0.private_ip} > /tmp/consul-server-addr",
]
}
provisioner "remote-exec" {
scripts = [
"${path.module}/../shared/scripts/install.sh",
"${path.module}/../shared/scripts/service.sh",
"${path.module}/../shared/scripts/ip_tables.sh",
]
}
}
resource "aws_security_group" "consul" {
name = "consul_${var.platform}"
description = "Consul internal traffic + maintenance."
vpc_id = "${var.vpc_id}"
// These are for internal traffic
ingress {
from_port = 0
to_port = 65535
protocol = "tcp"
self = true
}
ingress {
from_port = 0
to_port = 65535
protocol = "udp"
self = true
}
// These are for maintenance
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
// This is for outbound internet access
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

@@ -1,3 +0,0 @@
output "server_address" {
value = "${aws_instance.server.0.public_dns}"
}

@@ -1,105 +0,0 @@
variable "platform" {
default = "ubuntu"
description = "The OS Platform"
}
variable "user" {
default = {
ubuntu = "ubuntu"
rhel6 = "ec2-user"
centos6 = "centos"
centos7 = "centos"
rhel7 = "ec2-user"
}
}
variable "ami" {
description = "AWS AMI Id, if you change, make sure it is compatible with instance type, not all AMIs allow all instance types "
default = {
ap-south-1-ubuntu = "ami-08a5e367"
us-east-1-ubuntu = "ami-d651b8ac"
ap-northeast-1-ubuntu = "ami-8422ebe2"
eu-west-1-ubuntu = "ami-17d11e6e"
ap-southeast-1-ubuntu = "ami-e6d3a585"
ca-central-1-ubuntu = "ami-e59c2581"
us-west-1-ubuntu = "ami-2d5c6d4d"
eu-central-1-ubuntu = "ami-5a922335"
sa-east-1-ubuntu = "ami-a3e39ecf"
ap-southeast-2-ubuntu = "ami-391ff95b"
eu-west-2-ubuntu = "ami-e1f2e185"
ap-northeast-2-ubuntu = "ami-0f6fb461"
us-west-2-ubuntu = "ami-ecc63a94"
us-east-2-ubuntu = "ami-9686a4f3"
us-east-1-rhel6 = "ami-0d28fe66"
us-east-2-rhel6 = "ami-aff2a9ca"
us-west-2-rhel6 = "ami-3d3c0a0d"
us-east-1-centos6 = "ami-57cd8732"
us-east-2-centos6 = "ami-c299c2a7"
us-west-2-centos6 = "ami-1255b321"
us-east-1-rhel7 = "ami-2051294a"
us-east-2-rhel7 = "ami-0a33696f"
us-west-2-rhel7 = "ami-775e4f16"
us-east-1-centos7 = "ami-6d1c2007"
us-east-2-centos7 = "ami-6a2d760f"
us-west-1-centos7 = "ami-af4333cf"
}
}
variable "service_conf" {
default = {
ubuntu = "debian_consul.service"
rhel6 = "rhel_upstart.conf"
centos6 = "rhel_upstart.conf"
centos7 = "rhel_consul.service"
rhel7 = "rhel_consul.service"
}
}
variable "service_conf_dest" {
default = {
ubuntu = "consul.service"
rhel6 = "upstart.conf"
centos6 = "upstart.conf"
centos7 = "consul.service"
rhel7 = "consul.service"
}
}
variable "key_name" {
description = "SSH key name in your AWS account for AWS instances."
}
variable "key_path" {
description = "Path to the private key specified by key_name."
}
variable "region" {
default = "us-east-1"
description = "The region of AWS, for AMI lookups."
}
variable "servers" {
default = "3"
description = "The number of Consul servers to launch."
}
variable "instance_type" {
default = "t2.micro"
description = "AWS Instance type, if you change, make sure it is compatible with AMI, not all AMIs allow all instance types "
}
variable "tagName" {
default = "consul"
description = "Name tag for the servers"
}
variable "subnets" {
type = "map"
description = "map of subnets to deploy your infrastructure in, must have as many keys as your server count (default 3), -var 'subnets={\"0\"=\"subnet-12345\",\"1\"=\"subnets-23456\"}' "
}
variable "vpc_id" {
type = "string"
description = "ID of the VPC to use - in case your account doesn't have default VPC"
}
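
Read alongside `main.tf`, these variables imply a module invocation along the following lines. This is a sketch with placeholder IDs, pinned to a ref as the commit message recommends:

```
module "consul-aws" {
  source = "git::https://github.com/hashicorp/consul.git//terraform/aws?ref=v1.4.0"

  key_name = "consul"
  key_path = "/Users/xyz/consul.pem"
  platform = "ubuntu"
  region   = "us-east-1"
  servers  = "3"
  vpc_id   = "vpc-1234567"

  subnets = {
    "0" = "subnet-12345"
    "1" = "subnet-23456"
    "2" = "subnet-34567"
  }
}
```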

@@ -1,30 +0,0 @@
# Requirements

* Terraform installed
* DigitalOcean account with API key
* SSH key uploaded to DigitalOcean

### Variables

Populate terraform.tfvars as follows (or execute with arguments as shown in Usage):

    key_path = "~/.ssh/id_rsa"
    do_token = "ASDFQWERTYDERP"
    num_instances = "3"
    ssh_key_ID = "my_ssh_keyID_in_digital_ocean"
    region = "desired DO region"

# Usage

    terraform plan \
      -var 'key_path=~/.ssh/id_rsa' \
      -var 'do_token=ASDFQWERTYDERP' \
      -var 'num_instances=3' \
      -var 'ssh_key_ID=86:75:30:99:88:88:AA:FF:DD' \
      -var 'region=tor1'

    terraform apply \
      -var 'key_path=~/.ssh/id_rsa' \
      -var 'do_token=ASDFQWERTYDERP' \
      -var 'num_instances=3' \
      -var 'ssh_key_ID=86:75:30:99:88:88:AA:FF:DD' \
      -var 'region=tor1'

@@ -1,40 +0,0 @@
provider "digitalocean" {
token = "${var.do_token}"
}
resource "digitalocean_droplet" "consul" {
ssh_keys = ["${var.ssh_key_ID}"]
image = "${var.ubuntu}"
region = "${var.region}"
size = "2gb"
private_networking = true
name = "consul${count.index + 1}"
count = "${var.num_instances}"
connection {
type = "ssh"
private_key = "${file("${var.key_path}")}"
user = "root"
timeout = "2m"
}
provisioner "file" {
source = "${path.module}/../shared/scripts/debian_upstart.conf"
destination = "/tmp/upstart.conf"
}
provisioner "remote-exec" {
inline = [
"echo ${var.num_instances} > /tmp/consul-server-count",
"echo ${digitalocean_droplet.consul.0.ipv4_address} > /tmp/consul-server-addr",
]
}
provisioner "remote-exec" {
scripts = [
"${path.module}/../shared/scripts/install.sh",
"${path.module}/../shared/scripts/service.sh",
"${path.module}/../shared/scripts/ip_tables.sh",
]
}
}

@@ -1,7 +0,0 @@
output "first_consul_node_address" {
value = "${digitalocean_droplet.consul.0.ipv4_address}"
}
output "all_addresses" {
value = ["${digitalocean_droplet.consul.*.ipv4_address}"]
}

@@ -1,5 +0,0 @@
key_path = "~/.ssh/id_rsa"
ssh_key_ID = "my_ssh_key_ID_or_fingerprint_NOT_SSH_KEY_NAME"
do_token = "ASDFQWERTYDERP"
num_instances = "3"
region = "tor1"

@@ -1,26 +0,0 @@
variable "do_token" {}
variable "key_path" {}
variable "ssh_key_ID" {}
variable "region" {}
variable "num_instances" {}
# Default OS
variable "ubuntu" {
description = "Default LTS"
default = "ubuntu-14-04-x64"
}
variable "centos" {
description = "Default Centos"
default = "centos-72-x64"
}
variable "coreos" {
description = "Default Coreos"
default = "coreos-899.17.0"
}

@@ -1,33 +0,0 @@
## Running the Google Cloud Platform templates to set up a Consul cluster
The platform variable defines the target OS; the default is `ubuntu`.
Supported Machine Images:
- Ubuntu 14.04 (`ubuntu`)
- RHEL6 (`rhel6`)
- RHEL7 (`rhel7`)
- CentOS6 (`centos6`)
- CentOS7 (`centos7`)
For the Google Cloud provider, set up your environment as outlined here: https://www.terraform.io/docs/providers/google/index.html
To set up an Ubuntu-based cluster, replace `key_path` with an actual value and run:
```shell
terraform apply -var 'key_path=/Users/xyz/consul.pem'
```
_or_
```shell
terraform apply -var 'key_path=/Users/xyz/consul.pem' -var 'platform=ubuntu'
```
For RHEL6, run:
```shell
terraform apply -var 'key_path=/Users/xyz/consul.pem' -var 'platform=rhel6'
```
**Note:** For RHEL- and CentOS-based clusters, you need to have an [SSH key added](https://console.cloud.google.com/compute/metadata/sshKeys) for the user `root`.
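
As with the AWS module, the `-var` flags can instead be collected in a `terraform.tfvars` file. A minimal sketch with placeholder values, limited to variables this module declares:

```shell
key_path    = "/Users/xyz/consul.pem"
platform    = "ubuntu"
region_zone = "us-central1-f"
servers     = "3"
```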

@@ -1,69 +0,0 @@
resource "google_compute_instance" "consul" {
count = "${var.servers}"
name = "consul-${count.index}"
zone = "${var.region_zone}"
tags = ["${var.tag_name}"]
machine_type = "${var.machine_type}"
disk {
image = "${lookup(var.machine_image, var.platform)}"
}
network_interface {
network = "default"
access_config {
# Ephemeral
}
}
service_account {
scopes = ["https://www.googleapis.com/auth/compute.readonly"]
}
connection {
user = "${lookup(var.user, var.platform)}"
private_key = "${file("${var.key_path}")}"
}
provisioner "file" {
source = "${path.module}/../shared/scripts/${lookup(var.service_conf, var.platform)}"
destination = "/tmp/${lookup(var.service_conf_dest, var.platform)}"
}
provisioner "remote-exec" {
inline = [
"echo ${var.servers} > /tmp/consul-server-count",
"echo ${google_compute_instance.consul.0.network_interface.0.address} > /tmp/consul-server-addr",
]
}
provisioner "remote-exec" {
scripts = [
"${path.module}/../shared/scripts/install.sh",
"${path.module}/../shared/scripts/service.sh",
"${path.module}/../shared/scripts/ip_tables.sh",
]
}
}
resource "google_compute_firewall" "consul_ingress" {
name = "consul-internal-access"
network = "default"
allow {
protocol = "tcp"
ports = [
"8300", # Server RPC
"8301", # Serf LAN
"8302", # Serf WAN
"8400", # RPC
]
}
source_tags = ["${var.tag_name}"]
target_tags = ["${var.tag_name}"]
}

@@ -1,3 +0,0 @@
output "server_address" {
value = "${google_compute_instance.consul.0.network_interface.0.address}"
}

@@ -1,73 +0,0 @@
variable "platform" {
default = "ubuntu"
description = "The OS Platform"
}
variable "user" {
default = {
ubuntu = "ubuntu"
rhel6 = "root"
rhel7 = "root"
centos6 = "root"
centos7 = "root"
}
}
variable "machine_image" {
default = {
ubuntu = "ubuntu-os-cloud/ubuntu-1404-trusty-v20160314"
rhel6 = "rhel-cloud/rhel-6-v20160303"
rhel7 = "rhel-cloud/rhel-7-v20160303"
centos6 = "centos-cloud/centos-6-v20160301"
centos7 = "centos-cloud/centos-7-v20160301"
}
}
variable "service_conf" {
default = {
ubuntu = "debian_upstart.conf"
rhel6 = "rhel_upstart.conf"
rhel7 = "rhel_consul.service"
centos6 = "rhel_upstart.conf"
centos7 = "rhel_consul.service"
}
}
variable "service_conf_dest" {
default = {
ubuntu = "upstart.conf"
rhel6 = "upstart.conf"
rhel7 = "consul.service"
centos6 = "upstart.conf"
centos7 = "consul.service"
}
}
variable "key_path" {
description = "Path to the private key used to access the cloud servers"
}
variable "region" {
default = "us-central1"
description = "The region of Google Cloud where to launch the cluster"
}
variable "region_zone" {
default = "us-central1-f"
description = "The zone of Google Cloud in which to launch the cluster"
}
variable "servers" {
default = "3"
description = "The number of Consul servers to launch"
}
variable "machine_type" {
default = "f1-micro"
description = "Google Cloud Compute machine type"
}
variable "tag_name" {
default = "consul"
description = "Name tag for the servers"
}

@@ -1,30 +0,0 @@
#+AUTHOR: parasitid@yahoo.fr
#+TITLE: Terraforming Consul on OpenStack
* 1. Prerequisites
- Populate all variables in your terraform.tfvars
#+BEGIN_SRC terraform
username = "..."
password = "..."
tenant_name = "..."
auth_url = "https://myopenstackprovider.com/identity/v2.0"
public_key = "ssh-rsa AAAAB..."
key_file_path = "..."
#+END_SRC
- Change the regions, networks, flavor, and image IDs in variables.tf
according to your OpenStack settings
- Use an "upstart"-compatible image for your Consul nodes
* 2. Test it
: terraform apply
* 3. Terraform as a module
You should now be able to use OpenStack as a provider for the Consul module.
#+BEGIN_SRC terraform
module "consul" {
  source  = "github.com/hashicorp/consul/terraform/openstack"
  servers = 3
}
#+END_SRC

@@ -1,60 +0,0 @@
provider "openstack" {
user_name = "${var.username}"
tenant_name = "${var.tenant_name}"
password = "${var.password}"
auth_url = "${var.auth_url}"
}
resource "openstack_compute_keypair_v2" "consul_keypair" {
name = "consul-keypair"
region = "${var.region}"
public_key = "${var.public_key}"
}
resource "openstack_compute_floatingip_v2" "consul_ip" {
region = "${var.region}"
pool = "${lookup(var.pub_net_id, var.region)}"
count = "${var.servers}"
}
resource "openstack_compute_instance_v2" "consul_node" {
name = "consul-node-${count.index}"
region = "${var.region}"
image_id = "${lookup(var.image, var.region)}"
flavor_id = "${lookup(var.flavor, var.region)}"
floating_ip = "${element(openstack_compute_floatingip_v2.consul_ip.*.address,count.index)}"
key_pair = "consul-keypair"
count = "${var.servers}"
connection {
user = "${var.user_login}"
key_file = "${var.key_file_path}"
timeout = "1m"
}
provisioner "file" {
source = "${path.module}/scripts/upstart.conf"
destination = "/tmp/upstart.conf"
}
provisioner "file" {
source = "${path.module}/scripts/upstart-join.conf"
destination = "/tmp/upstart-join.conf"
}
provisioner "remote-exec" {
inline = [
"echo ${var.servers} > /tmp/consul-server-count",
"echo ${count.index} > /tmp/consul-server-index",
"echo ${openstack_compute_instance_v2.consul_node.0.network.0.fixed_ip_v4} > /tmp/consul-server-addr",
]
}
provisioner "remote-exec" {
scripts = [
"${path.module}/scripts/install.sh",
"${path.module}/scripts/server.sh",
"${path.module}/scripts/service.sh",
]
}
}

@@ -1,3 +0,0 @@
output "nodes_floating_ips" {
value = "${join(\",\", openstack_compute_instance_v2.consul_node.*.floating_ip)}"
}

@@ -1,37 +0,0 @@
#!/usr/bin/env bash
set -e
# Read the address to join from the file we provisioned
JOIN_ADDRS=$(cat /tmp/consul-server-addr | tr -d '\n')
# consul version to install
CONSUL_VERSION=0.6.4
sudo sh -c 'echo "127.0.0.1 consul-node-'$(cat /tmp/consul-server-index)'" >> /etc/hosts'
echo "Installing dependencies..."
sudo apt-get update -y
sudo apt-get install -y unzip
echo "Fetching Consul..."
cd /tmp
wget "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip" -O consul.zip
echo "Installing Consul..."
unzip consul.zip >/dev/null
sudo chmod +x consul
sudo mv consul /usr/local/bin/consul
sudo mkdir -p /etc/consul.d
sudo mkdir -p /mnt/consul
sudo mkdir -p /etc/service
# Setup the join address
cat >/tmp/consul-join << EOF
export CONSUL_JOIN="${JOIN_ADDRS}"
EOF
sudo mv /tmp/consul-join /etc/service/consul-join
chmod 0644 /etc/service/consul-join
echo "Installing Upstart service..."
sudo mv /tmp/upstart.conf /etc/init/consul.conf
sudo mv /tmp/upstart-join.conf /etc/init/consul-join.conf

@@ -1,14 +0,0 @@
#!/usr/bin/env bash
set -e
# Read from the file we created
SERVER_COUNT=$(cat /tmp/consul-server-count | tr -d '\n')
# Write the flags to a temporary file
cat >/tmp/consul_flags << EOF
export CONSUL_FLAGS="-server -bootstrap-expect=${SERVER_COUNT} -data-dir=/mnt/consul"
EOF
# Write it to the full service file
sudo mv /tmp/consul_flags /etc/service/consul
chmod 0644 /etc/service/consul

@@ -1,5 +0,0 @@
#!/usr/bin/env bash
set -e
echo "Starting Consul..."
sudo start consul

@@ -1,25 +0,0 @@
description "Join the consul cluster"
start on started consul
stop on stopped consul
task
script
if [ -f "/etc/service/consul-join" ]; then
. /etc/service/consul-join
fi
# Keep trying to join until it succeeds
set +e
while :; do
logger -t "consul-join" "Attempting join: ${CONSUL_JOIN}"
/usr/local/bin/consul join \
${CONSUL_JOIN} \
>>/var/log/consul-join.log 2>&1
[ $? -eq 0 ] && break
sleep 5
done
logger -t "consul-join" "Join success!"
end script

@@ -1,24 +0,0 @@
description "Consul agent"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
script
if [ -f "/etc/service/consul" ]; then
. /etc/service/consul
fi
# Make sure to use all our CPUs, because Consul can block a scheduler thread
export GOMAXPROCS=`nproc`
# Get the public IP
BIND=`ifconfig eth0 | grep "inet addr" | awk '{ print substr($2,6) }'`
exec /usr/local/bin/consul agent \
-config-dir="/etc/consul.d" \
-bind=$BIND \
${CONSUL_FLAGS} \
>>/var/log/consul.log 2>&1
end script

@@ -1,46 +0,0 @@
variable "username" {}
variable "password" {}
variable "tenant_name" {}
variable "auth_url" {}
variable "public_key" {}
variable "user_login" {
default = "stack"
}
variable "key_file_path" {}
variable "nb_of_nodes" {
default = "4"
}
variable "pub_net_id" {
default = {
tr2 = "PublicNetwork-01"
tr2-1 = ""
}
}
variable "region" {
default = "tr2"
description = "The region of openstack, for image/flavor/network lookups."
}
variable "image" {
default = {
tr2 = "eee08821-c95a-448f-9292-73908c794661"
tr2-1 = ""
}
}
variable "flavor" {
default = {
tr2 = "100"
tr2-1 = ""
}
}
variable "servers" {
default = "3"
description = "The number of Consul servers to launch."
}

@@ -1,13 +0,0 @@
[Unit]
Description=consul agent
Requires=network-online.target
After=network-online.target
[Service]
EnvironmentFile=-/etc/sysconfig/consul
Restart=on-failure
ExecStart=/usr/local/bin/consul agent $CONSUL_FLAGS -config-dir=/etc/systemd/system/consul.d
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target

@@ -1,24 +0,0 @@
description "Consul agent"
start on started networking
stop on runlevel [!2345]
respawn
# This is to avoid Upstart re-spawning the process upon `consul leave`
normal exit 0 INT
script
if [ -f "/etc/service/consul" ]; then
. /etc/service/consul
fi
# Get the local IP
BIND=`ifconfig eth0 | grep "inet addr" | awk '{ print substr($2,6) }'`
exec /usr/local/bin/consul agent \
-config-dir="/etc/consul.d" \
-bind=$BIND \
${CONSUL_FLAGS} \
>>/var/log/consul.log 2>&1
end script

@@ -1,54 +0,0 @@
#!/usr/bin/env bash
set -e

echo "Installing dependencies..."
if [ -x "$(command -v apt-get)" ]; then
  sudo su -s /bin/bash -c 'sleep 30 && apt-get update && apt-get install unzip' root
else
  sudo yum update -y
  sudo yum install -y unzip wget
fi

echo "Fetching Consul..."
CONSUL=1.0.0
cd /tmp
wget https://releases.hashicorp.com/consul/${CONSUL}/consul_${CONSUL}_linux_amd64.zip -O consul.zip --quiet

echo "Installing Consul..."
unzip consul.zip >/dev/null
chmod +x consul
sudo mv consul /usr/local/bin/consul
sudo mkdir -p /opt/consul/data

# Read from the file we created
SERVER_COUNT=$(cat /tmp/consul-server-count | tr -d '\n')
CONSUL_JOIN=$(cat /tmp/consul-server-addr | tr -d '\n')

# Write the flags to a temporary file
cat >/tmp/consul_flags << EOF
CONSUL_FLAGS="-server -bootstrap-expect=${SERVER_COUNT} -join=${CONSUL_JOIN} -data-dir=/opt/consul/data"
EOF

if [ -f /tmp/upstart.conf ]; then
  echo "Installing Upstart service..."
  sudo mkdir -p /etc/consul.d
  sudo mkdir -p /etc/service
  sudo chown root:root /tmp/upstart.conf
  sudo mv /tmp/upstart.conf /etc/init/consul.conf
  sudo chmod 0644 /etc/init/consul.conf
  sudo mv /tmp/consul_flags /etc/service/consul
  sudo chmod 0644 /etc/service/consul
else
  echo "Installing Systemd service..."
  sudo mkdir -p /etc/sysconfig
  sudo mkdir -p /etc/systemd/system/consul.d
  sudo chown root:root /tmp/consul.service
  sudo mv /tmp/consul.service /etc/systemd/system/consul.service
  sudo mv /tmp/consul*json /etc/systemd/system/consul.d/ || echo
  sudo chmod 0644 /etc/systemd/system/consul.service
  sudo mv /tmp/consul_flags /etc/sysconfig/consul
  sudo chown root:root /etc/sysconfig/consul
  sudo chmod 0644 /etc/sysconfig/consul
fi

@@ -1,13 +0,0 @@
#!/usr/bin/env bash
set -e

# Open Consul's ports: 8300 (server RPC), 8301 (Serf LAN), 8302 (Serf WAN), 8400 (CLI RPC)
sudo iptables -I INPUT -s 0/0 -p tcp --dport 8300 -j ACCEPT
sudo iptables -I INPUT -s 0/0 -p tcp --dport 8301 -j ACCEPT
sudo iptables -I INPUT -s 0/0 -p tcp --dport 8302 -j ACCEPT
sudo iptables -I INPUT -s 0/0 -p tcp --dport 8400 -j ACCEPT

# Persist the rules (the path depends on the distribution)
if [ -d /etc/sysconfig ]; then
  sudo iptables-save | sudo tee /etc/sysconfig/iptables
else
  sudo iptables-save | sudo tee /etc/iptables.rules
fi

@@ -1,13 +0,0 @@
[Unit]
Description=consul agent
Requires=network-online.target
After=network-online.target
[Service]
EnvironmentFile=-/etc/sysconfig/consul
Restart=on-failure
ExecStart=/usr/local/bin/consul agent $CONSUL_FLAGS -config-dir=/etc/systemd/system/consul.d
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target

@@ -1,26 +0,0 @@
description "Consul agent"
start on started network
stop on runlevel [!2345]
respawn
# This is to avoid Upstart re-spawning the process upon `consul leave`
normal exit 0 INT
script
if [ -f "/etc/service/consul" ]; then
. /etc/service/consul
fi
# Make sure to use all our CPUs, because Consul can block a scheduler thread
export GOMAXPROCS=`nproc`
# Get the public IP
BIND=`ifconfig eth0 | grep "inet addr" | awk '{ print substr($2,6) }'`
exec /usr/local/bin/consul agent \
-config-dir="/etc/consul.d" \
-bind=$BIND \
${CONSUL_FLAGS} \
>>/var/log/consul.log 2>&1
end script

@@ -1,12 +0,0 @@
#!/usr/bin/env bash
set -e

echo "Starting Consul..."
if [ -x "$(command -v systemctl)" ]; then
  echo "using systemctl"
  sudo systemctl enable consul.service
  sudo systemctl start consul
else
  echo "using upstart"
  sudo start consul
fi