* Plumb Datacenter and Namespace to metrics provider in preparation for them being usable.
* Move the metrics loader/status to a new component and show the reason for it being disabled.
* Remove stray console.log
* Rebuild AssetFS to resolve conflicts
* Yarn upgrade
Previously config entries sharing a kind & name but in different
namespaces could occasionally cause "stuck states" in replication
because the namespace fields were ignored during the differential
comparison phase.
Example:
Two config entries written to the primary:
kind=A,name=web,namespace=bar
kind=A,name=web,namespace=foo
Under the covers these both get saved to memdb, so they are sorted by
all 3 components (kind,name,namespace) during natural iteration. This
means that before the replication code does its own incomplete sort,
the underlying data IS sorted by namespace ascending (bar comes before
foo).
After one pass of replication the primary and secondary datacenters have
the same set of config entries present. If
"kind=A,name=web,namespace=bar" were to be deleted, then things get
weird. Before replication the two sides look like:
primary: [
kind=A,name=web,namespace=foo
]
secondary: [
kind=A,name=web,namespace=bar
kind=A,name=web,namespace=foo
]
The differential comparison phase walks these two lists in sorted order
and first compares "kind=A,name=web,namespace=foo" vs
"kind=A,name=web,namespace=bar" and falsely determines they are the SAME,
thus causing an update of "kind=A,name=web,namespace=foo". Then it
compares "<nothing>" with "kind=A,name=web,namespace=foo" and falsely
determines that the latter should be DELETED.
During reconciliation the deletes are processed before updates, and so
for a brief moment in the secondary "kind=A,name=web,namespace=foo" is
erroneously deleted and then immediately restored.
Unfortunately after this replication phase the final state is identical
to the initial state, so when it loops around again (rate limited) it
repeats the same set of operations indefinitely.
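A minimal Go sketch of the corrected comparison is below. The `entryKey` type and the `compare`/`diff` helpers are illustrative stand-ins for the real replication structures, not Consul's actual code; the point is only that the namespace participates in both the ordering and the equality check, so the example above yields a single delete of namespace=bar instead of a spurious update/delete of namespace=foo.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// entryKey is a simplified stand-in for a config entry's identity; the real
// replication code works with richer types.
type entryKey struct {
	Kind, Name, Namespace string
}

// compare orders entries by kind, then name, then namespace, matching the
// memdb iteration order described above. Omitting the namespace from this
// comparison is what produced the stuck replication state.
func compare(a, b entryKey) int {
	if c := strings.Compare(a.Kind, b.Kind); c != 0 {
		return c
	}
	if c := strings.Compare(a.Name, b.Name); c != 0 {
		return c
	}
	return strings.Compare(a.Namespace, b.Namespace)
}

// diff walks two sorted lists and reports which local entries should be
// deleted and which remote entries should be written locally.
func diff(local, remote []entryKey) (deletions, updates []entryKey) {
	sort.Slice(local, func(i, j int) bool { return compare(local[i], local[j]) < 0 })
	sort.Slice(remote, func(i, j int) bool { return compare(remote[i], remote[j]) < 0 })

	i, j := 0, 0
	for i < len(local) && j < len(remote) {
		switch c := compare(local[i], remote[j]); {
		case c < 0: // only exists locally -> delete it
			deletions = append(deletions, local[i])
			i++
		case c > 0: // only exists remotely -> write it locally
			updates = append(updates, remote[j])
			j++
		default: // same identity -> nothing to do in this sketch
			i++
			j++
		}
	}
	deletions = append(deletions, local[i:]...)
	updates = append(updates, remote[j:]...)
	return deletions, updates
}

func main() {
	secondary := []entryKey{{"A", "web", "bar"}, {"A", "web", "foo"}}
	primary := []entryKey{{"A", "web", "foo"}}

	del, upd := diff(secondary, primary)
	fmt.Println("delete:", del) // delete: [{A web bar}]
	fmt.Println("update:", upd) // update: []
}
```

With the namespace included, the walk pairs up the two "foo" entries and reports only the extra "bar" entry for deletion, so replication converges instead of looping.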
* Prevent redirect to topology url and hide Topology tab if service has no proxies
* Remove unused computed function from topology model
* Fix up tests
* Remove use of Exists computed function
* Add tests for hiding topology tab
* Update Topology metrics dashboard and configuration links
* Fixup tests
* Remove Dashboard Link from settings page
* Remove use of settings Dashboard links
There is a delay between an intentions change being made and it being
reflected in the Envoy runtime configuration. Now that the enforcement
happens inside of Envoy instead of over in the agent, our tests need to
explicitly wait until the xDS reconfiguration is complete before
attempting to assert intentions worked.
Also remove a few double retry loops.
This speeds up individual envoy integration test runs from ~23m to ~14m.
It's also a pre-req for possibly switching to doing the tests entirely within Go (no shell-outs).
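Conceptually the wait looks like the sketch below: poll Envoy's admin config_dump endpoint until the new configuration (e.g. the RBAC rule generated for the intention) is visible, then run the assertion. The helper name, the matched snippet, and the timing are illustrative assumptions, not the suite's actual helpers.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForEnvoyConfig polls Envoy's admin /config_dump endpoint until the
// expected snippet appears or the deadline expires. Only after it returns
// nil is it safe for a test to assert that the intention is enforced.
func waitForEnvoyConfig(adminAddr, expect string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		resp, err := http.Get("http://" + adminAddr + "/config_dump")
		if err == nil {
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			if readErr == nil && strings.Contains(string(body), expect) {
				return nil // xDS reconfiguration has landed
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("envoy config never contained %q", expect)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Hypothetical usage: wait up to 30s for the intention to show up in
	// Envoy's config before running the connectivity assertion.
	if err := waitForEnvoyConfig("127.0.0.1:19000", `"rules"`, 30*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
```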
* ui: Add the most basic workspace root in /ui
* We already have a LICENSE file in the repository root
* Change directory path in build scripts ui-v2 -> ui
* Make yarn install flags configurable from elsewhere
* Minimal workspace root makefile
* Call the new docker specific target
* Update yarn in the docker build image
* Reconfigure the netlify target and move to the higher makefile
* Move ui-v2 -> ui/packages/consul-ui
* Change repo root to reflect new folder structure
* Temporarily don't hoist consul-api-double
* Fixup CI configuration
* Fixup lint errors
* Fixup Netlify target
Consul's Connect CA documentation mentions that future releases will
support a pluggable CA system. This sentence has existed in the docs
for over two years; however, there are currently no plans to develop
this feature on the near-term roadmap.
This commit removes this sentence to avoid giving the impression that
this feature will be available in an upcoming release.