[docs] Semaphore (#5524)

* Updating and adding headings.

* Update website/source/docs/guides/semaphore.html.md

Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>

* Update website/source/docs/guides/semaphore.html.md

Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
This commit is contained in:
kaitlincarter-hc 2019-03-26 13:44:20 -05:00 committed by Judith Malnick
parent 74c31c540f
commit c7166ba58b
1 changed file with 43 additions and 39 deletions


# Semaphore

A distributed semaphore can be useful when you want to coordinate many services while
restricting access to certain resources. In this guide we will focus on using Consul's support for
sessions and Consul KV to build a distributed semaphore. Note that there are a number of ways a
semaphore can be built; we will not cover all of the possible methods in this guide.

To complete this guide successfully, you should have familiarity with
[Consul KV](/docs/agent/kv.html) and Consul [sessions](/docs/internals/sessions.html).

~> If you only need mutual exclusion or leader election,
[this guide](/docs/guides/leader-election.html)
provides a simpler algorithm that can be used instead.

## Contending Nodes in the Semaphore

Let's imagine we have a set of nodes who are attempting to acquire a slot in the
semaphore. All nodes that are participating should agree on three decisions (one possible
set of example values is sketched just after this list):

- the prefix in the KV store used to coordinate.
- a single key to use as a lock.
- a limit on the number of slot holders.
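
For illustration, the sketches in this guide assume one hypothetical set of choices; the
names below are examples only, and the rest of the guide refers to them generically as
`<prefix>` and `<lock>`.

```sh
# Hypothetical values -- any choices the contending nodes agree on will work.
PREFIX="service/db-semaphore"       # <prefix>: KV prefix used for coordination
LOCK="${PREFIX}/.lock"              # <lock>: single key used as the lock
LIMIT=2                             # maximum number of concurrent slot holders
```
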

### Session

The first step is for each contending node to create a session. Sessions allow us to build
a system that can gracefully handle failures.

This is done using the
[Session HTTP API](/api/session.html#session_create).

```sh
curl -X PUT -d '{"Name": "db-semaphore"}' \
  http://localhost:8500/v1/session/create
```

This will return a JSON object containing the session ID.

```json
{
  "ID": "4ca8e74b-6350-7587-addf-a18084928f3c"
}
```
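
If you are following along in a shell, a sketch like the following captures the returned
ID for the later steps. It assumes `jq` is available; the variable name is arbitrary.

```sh
# Create the session and keep its ID for the contender key and lock body below.
SESSION_ID=$(curl -s -X PUT -d '{"Name": "db-semaphore"}' \
  http://localhost:8500/v1/session/create | jq -r '.ID')
echo "session: ${SESSION_ID}"
```
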

-> **Note:** Sessions by default only make use of the gossip failure detector. That is, the session is considered held by a node as long as the default Serf health check has not declared the node unhealthy. Additional checks can be specified at session creation if desired.

### KV Entry for Node Locks

Next, we create a lock contender entry. Each contender creates a kv entry that is tied
to a session. This is done so that if a contender is holding a slot and fails, its session
is detached from the key, which can then be detected by the other contenders.

Create the contender key by doing an `acquire` on `<prefix>/<session>` via `PUT`.

```sh
curl -X PUT -d <body> http://localhost:8500/v1/kv/<prefix>/<session>?acquire=<session>
```
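
With the hypothetical values from earlier, that request might look like the sketch below;
the body is arbitrary metadata about the contender.

```sh
# Acquire the contender key <prefix>/<session> with our own session.
ACQUIRED=$(curl -s -X PUT -d 'contender metadata' \
  "http://localhost:8500/v1/kv/${PREFIX}/${SESSION_ID}?acquire=${SESSION_ID}")
```
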
The call will either return `true` or `false`. If `true`, the contender entry has been
created. If `false`, the contender node was not created; it's likely that this indicates
a session invalidation.
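
A minimal way to act on that boolean, assuming the `ACQUIRED` variable from the sketch
above, is to treat anything other than `true` as a failure and start over with a fresh session:

```sh
if [ "${ACQUIRED}" != "true" ]; then
  # Likely a session invalidation: create a new session and retry the acquire.
  echo "contender key not created; recreate the session and retry" >&2
fi
```
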
### Single Key for Coordination

The next step is to create a single key to coordinate which holders are currently
reserving a slot. A good choice for this lock key is simply `<prefix>/.lock`. We will
refer to this special coordinating key as `<lock>`.

```sh
curl -X PUT -d <body> http://localhost:8500/v1/kv/<lock>?cas=0
```

Since the lock is being created, a `cas` index of 0 is used so that the key is only put if it does not exist.

The `body` of the request should contain both the intended slot limit for the semaphore and the session ids
of the current holders (initially only of the creator). A simple JSON body like the following works.

```json
{
  "Limit": 2,
  "Holders": [
    "<session>"
  ]
}
```
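
Putting those pieces together with the hypothetical values from the earlier sketches, lock
creation might look like this:

```sh
# Create <lock> only if it does not exist yet (cas=0), seeding it with the slot
# limit and our own session as the first holder.
LOCK_BODY=$(printf '{"Limit": %d, "Holders": ["%s"]}' "${LIMIT}" "${SESSION_ID}")
curl -s -X PUT -d "${LOCK_BODY}" "http://localhost:8500/v1/kv/${LOCK}?cas=0"
```
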
## Semaphore Management

The current state of the semaphore is read by doing a `GET` on the entire `<prefix>`.

```sh
curl http://localhost:8500/v1/kv/<prefix>?recurse
```

Within the list of the entries, we should find two keys: the `<lock>` and the
contender key `<prefix>/<session>`.

```json
[
  {
    "LockIndex": 0,
    ...
```

To acquire a slot, the contender adds its session to the lock's `Holders` list and writes
the updated body back to `<lock>` with a Check-And-Set request. This performs an optimistic
update. This is done with:

```sh
curl -X PUT -d <Updated Lock Body> http://localhost:8500/v1/kv/<lock>?cas=<lock-modify-index>
```

`lock-modify-index` is the latest `ModifyIndex` value known for `<lock>`, 901 in this example.
If this request succeeds with `true`, the contender now holds a slot in the semaphore.
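
As a rough sketch of that flow, assuming `jq`, `base64`, and the variables from the earlier
sketches, a contender could read the lock, check for a free slot, and attempt the
Check-And-Set update like this:

```sh
# Read <lock>, including its ModifyIndex and base64-encoded body.
ENTRY=$(curl -s "http://localhost:8500/v1/kv/${LOCK}")
MODIFY_INDEX=$(echo "${ENTRY}" | jq -r '.[0].ModifyIndex')
BODY=$(echo "${ENTRY}" | jq -r '.[0].Value' | base64 --decode)

# Only attempt the update if the semaphore still has a free slot.
if [ "$(echo "${BODY}" | jq '(.Holders | length) < .Limit')" = "true" ]; then
  UPDATED=$(echo "${BODY}" | jq -c --arg s "${SESSION_ID}" '.Holders += [$s]')
  curl -s -X PUT -d "${UPDATED}" \
    "http://localhost:8500/v1/kv/${LOCK}?cas=${MODIFY_INDEX}"
fi
```
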

To re-attempt the acquisition, we watch for changes on `<prefix>`. This is because a slot
may be released, a node may fail, etc. Watching for changes is done via a blocking query
against `/kv/<prefix>?recurse`.

Slot holders **must** continuously watch for changes to `<prefix>` since their slot can be
released by an operator or automatically released due to a false positive in the failure detector.
On changes to `<prefix>`, the lock's `Holders` list must be re-checked to ensure the slot
is still held. Additionally, if the watch fails to connect, the slot should be considered lost.
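
A blocking watch can be sketched with the `index` and `wait` query parameters, using the
`X-Consul-Index` header from a previous response; this is Consul's standard blocking-query
mechanism, shown here only as an illustration:

```sh
# Read the current index for <prefix>, then long-poll until something changes
# under it (or the wait timeout elapses, whichever comes first).
INDEX=$(curl -s -o /dev/null -D - "http://localhost:8500/v1/kv/${PREFIX}?recurse" \
  | awk 'tolower($1) == "x-consul-index:" {print $2}' | tr -d '\r')
curl -s "http://localhost:8500/v1/kv/${PREFIX}?recurse&index=${INDEX}&wait=5m"
```
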
This semaphore system is purely *advisory*, so it is up to the client to verify
that a slot is held before (and during) execution of some critical operation.

Lastly, if a slot holder ever wishes to release its slot voluntarily, it should do so with a
Check-And-Set operation against `<lock>` that removes its session from the `Holders` object.
Once that is done, both its contender key `<prefix>/<session>` and its session should be deleted.
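
A voluntary release could therefore be sketched as three steps, again assuming the
variables and tools from the earlier sketches:

```sh
# 1. Remove our session from Holders with a Check-And-Set on <lock>.
ENTRY=$(curl -s "http://localhost:8500/v1/kv/${LOCK}")
MODIFY_INDEX=$(echo "${ENTRY}" | jq -r '.[0].ModifyIndex')
RELEASED=$(echo "${ENTRY}" | jq -r '.[0].Value' | base64 --decode \
  | jq -c --arg s "${SESSION_ID}" '.Holders -= [$s]')
curl -s -X PUT -d "${RELEASED}" "http://localhost:8500/v1/kv/${LOCK}?cas=${MODIFY_INDEX}"

# 2. Delete the contender key.
curl -s -X DELETE "http://localhost:8500/v1/kv/${PREFIX}/${SESSION_ID}"

# 3. Destroy the session.
curl -s -X PUT "http://localhost:8500/v1/session/destroy/${SESSION_ID}"
```
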

## Summary

In this guide we created a distributed semaphore using Consul KV and Consul sessions. We also learned how to manage the newly created semaphore.