During development an HTTP request will pause for 1 minute ONLY when an
`?index` query parameter is set. This gives a realistic emulation of blocking queries.
During testing we can change this latency when we are testing blocking
queries, which we do in numerous places.
A problem can arise when testing on a very slow machine.
If you are not testing blocking queries, and therefore have not set a
latency to test with, a test can finish its assertions on a normal page
yet not tear down before a further blocking query request is made. That
blocking query then uses the default latency, which causes the page to
hang for 1 minute and in turn causes the test to time out.
This only seems to happen on a very slow system, but it potentially
explains why we occasionally see the odd flaky test popping up.
* k8s > ambassador integration moved to k8s > service mesh > ambassador integration
* k8s > get started > overview moved to k8s > get started > install with
helm chart
* k8s > helm chart reference renamed to helm chart configuration
* Unignore any bin files underneath the UI folder
* Add previously ignored node exec script
* Rearrange steps file so we can continue to list steps out
secondaryIntermediateCertRenewalWatch was using `retryLoopBackoff` to
renew the intermediate certificate. Once it entered the inner loop and
started `retryLoopBackoff` it would never leave it.
`retryLoopBackoffAbortOnSuccess` returns once renewal succeeds, as was
originally intended.
The nodeCheck slice was being used as the first argument to append, which in some cases modifies the array backing the slice. This could cause service checks for other services to end up in the wrong event.
Also refactor some things to reduce the number of arguments to functions.
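In isolation, the underlying Go pitfall looks like this (the `check` type and names are illustrative, not the actual Consul structs):

```go
package main

import "fmt"

type check struct{ name string }

func main() {
	// A node-level slice with spare capacity, shared by every service.
	nodeChecks := make([]check, 1, 4)
	nodeChecks[0] = check{"node-maintenance"}

	// Appending per-service checks onto the shared slice reuses its
	// backing array because the capacity allows it...
	svcA := append(nodeChecks, check{"svc-a:http"})
	svcB := append(nodeChecks, check{"svc-b:grpc"})

	// ...so the second append overwrites the element the first one wrote.
	fmt.Println(svcA[1].name) // prints "svc-b:grpc", not "svc-a:http"
	_ = svcB

	// Fix: copy into a fresh slice before appending the service checks.
	svcAFixed := append(append([]check{}, nodeChecks...), check{"svc-a:http"})
	svcBFixed := append(append([]check{}, nodeChecks...), check{"svc-b:grpc"})
	fmt.Println(svcAFixed[1].name, svcBFixed[1].name) // independent results
}
```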
Creating a new readTxn does not work because it will not see the newly created objects that are about to be committed. Instead, use the active write Txn.
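A standalone go-memdb sketch of the behaviour, with an illustrative table and schema rather than Consul's real ones:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-memdb"
)

type service struct{ Name string }

func main() {
	schema := &memdb.DBSchema{
		Tables: map[string]*memdb.TableSchema{
			"services": {
				Name: "services",
				Indexes: map[string]*memdb.IndexSchema{
					"id": {
						Name:    "id",
						Unique:  true,
						Indexer: &memdb.StringFieldIndex{Field: "Name"},
					},
				},
			},
		},
	}
	db, err := memdb.NewMemDB(schema)
	if err != nil {
		panic(err)
	}

	write := db.Txn(true) // open write transaction
	if err := write.Insert("services", &service{Name: "web"}); err != nil {
		panic(err)
	}

	// A brand-new read transaction snapshots the last committed state,
	// so it cannot see the uncommitted insert above.
	read := db.Txn(false)
	got, _ := read.First("services", "id", "web")
	fmt.Println("new read txn sees it:", got != nil) // false

	// The open write transaction does see its own pending changes.
	got, _ = write.First("services", "id", "web")
	fmt.Println("write txn sees it:", got != nil) // true

	write.Commit()
}
```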
Whenever an upsert/deletion of a config entry happens, within the open
state store transaction we speculatively test compile all discovery
chains that may be affected by the pending modification to verify that
the write would not create an erroneous scenario (such as splitting
traffic to a subset that did not exist).
If a single discovery chain evaluation references two config entries
with the same kind and name in different namespaces, then the
upsert/deletion could sometimes be falsely rejected. It does not appear
as though this bug would have let invalid writes through to the state
store, so the correction does not require a cleanup phase.
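A simplified illustration of the faulty lookup key, using stand-in types rather than Consul's config entry structs: keying only by kind and name collapses entries that differ only by namespace.

```go
package main

import "fmt"

type configEntry struct {
	Kind, Name, Namespace string
}

// Key without the namespace: two entries with the same kind and name but
// different namespaces collide, so one overwrites the other and the
// speculative compile evaluates the wrong entry.
func keyWithoutNamespace(e configEntry) string {
	return e.Kind + "/" + e.Name
}

// Fixed key: the namespace is part of the identity.
func keyWithNamespace(e configEntry) string {
	return e.Kind + "/" + e.Namespace + "/" + e.Name
}

func main() {
	entries := []configEntry{
		{Kind: "service-resolver", Name: "web", Namespace: "default"},
		{Kind: "service-resolver", Name: "web", Namespace: "team-a"},
	}

	buggy := map[string]configEntry{}
	fixed := map[string]configEntry{}
	for _, e := range entries {
		buggy[keyWithoutNamespace(e)] = e
		fixed[keyWithNamespace(e)] = e
	}
	fmt.Println(len(buggy), len(fixed)) // 1 2: the buggy key loses an entry
}
```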
This commit refactors the state store usage code to track unique service
name changes on transaction commit. This means we only need to look up
usage entries when reading the information, instead of iterating over
a large number of service indices.
- Take into account a service instance's name being changed
- Do not iterate through the entire list of service instances; we only
  care whether there are 0, 1, or more than 1 (see the sketch below)
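A rough sketch of the commit-time tracking idea, with hypothetical types standing in for the real memdb change events:

```go
package main

import "fmt"

// change is a stand-in for a committed memdb change touching a service
// instance; Deleted marks the removal of an instance.
type change struct {
	Service string
	Deleted bool
}

type usage struct {
	instances map[string]int // service name -> instance count
	Services  int            // number of unique service names
}

func newUsage() *usage {
	return &usage{instances: map[string]int{}}
}

// Commit applies a batch of changes, adjusting the unique-name counter
// only when a name appears for the first time or disappears entirely, so
// reads never have to iterate the service indices.
func (u *usage) Commit(changes []change) {
	for _, c := range changes {
		before := u.instances[c.Service]
		if c.Deleted {
			u.instances[c.Service] = before - 1
		} else {
			u.instances[c.Service] = before + 1
		}
		after := u.instances[c.Service]
		switch {
		case before == 0 && after > 0:
			u.Services++
		case before > 0 && after == 0:
			u.Services--
			delete(u.instances, c.Service)
		}
	}
}

func main() {
	u := newUsage()
	u.Commit([]change{{Service: "web"}, {Service: "web"}, {Service: "api"}})
	fmt.Println(u.Services) // 2

	// Renaming an instance arrives as a delete of the old name plus a
	// register under the new one; both sides are accounted for.
	u.Commit([]change{{Service: "api", Deleted: true}, {Service: "api-v2"}})
	fmt.Println(u.Services) // still 2
}
```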