* add intermediate ca metric routine
* add Gauge config for intermediate cert
* Stop metrics routine when stopping leader
* add changelog entry
* update changelog
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* use variables instead of a map
* go imports sort
* Add metrics for primary and secondary ca
* start metrics routine in the right DC
* add telemetry documentation
* update docs
* extract expiry fetching in a func
* merge metrics for primary and secondary into signing ca metric
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
The default namespace, and the token's default namespace (or its origin namespace), is slightly more complicated than other things we deal with in the UI; there's plenty of info/docs on this that I've added in this PR.
Previously:
When a namespace was not specified in the URL, we used to default to the default namespace. When you logged in using a token we automatically forwarded you to the namespace URL that your token originates from, so you were then using the namespace for your token by default. You could of course then edit the URL to remove the namespace portion, or perhaps revisit the UI at the root path with your token already set. In these latter cases we would show you information from the default namespace. So if you had no namespace segment/portion in the URL, we would assume default, perform actions against the default namespace and highlight the default namespace in the namespace selector menu. If you wanted to perform actions in your token's origin namespace you would have to manually select it from the namespace selector menu.
This PR:
Now, when you have no namespace segment/portion in the URL, we use the token's origin namespace instead (and if you don't have a token, we fall back to the default namespace as before).
Notes/thoughts:
I originally thought we were showing an incorrectly selected namespace in the namespace selector, but it also matched up with what we were doing with the API, so it was in fact correct. The issue was more that we weren't selecting the origin namespace of the token for the user when a namespace segment was omitted from the URL. Seeing as we automatically forward you to the token's origin namespace when you log in, and we were correctly showing the namespace we were acting on when you had no namespace segment in the URL (in the previous case default), I'm not entirely sure how much of an issue this actually was.
This characteristic of namespace+token+namespace is a little weird and it's easy to miss a subtlety or two, so I tried to add some documentation in here for future me/someone else (including an in-depth code comment around one of the API endpoints where this is very subtle and very easy to miss). I'm not the greatest at words, so it would be great to get some edits there if it doesn't seem clear to folks.
The fact that we used to save your previous datacenter and namespace into local storage for reasons also meant the interaction here was slightly more complicated than it needed to be, so whilst we were here we rejigged things slightly to satisfy said reasons still but not use local storage (we try and grab the info from higher up). A lot of the related code here is from before we had our Routlets which I think could probably make all of this a lot less complicated, but I didn't want to do a wholesale replacement in this PR, we can save that for a separate PR on its own at some point.
* trim carriage return from certificates when inserting rootCA in the inMemDB
* format rootCA properly when returning the CA on the connect CA endpoint
* Fix linter warnings
* Fix providers to trim certs before returning them
* trim newlines on write when possible
* add changelog
* make sure all providers return a trailing newline after the root and intermediate certs
* Fix endpoint to return trailing new line
* Fix failing test with vault provider
* make test more robust
* make sure all providers return a trailing newline after the leaf certs
* Check for the newline suffix before removing it, and use a shared function (sketched after this list)
* Add comment to consul provider
* Update change log
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
* fix typo
* simplify code callflow
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
* extract requireNewLine as shared func
* remove dependency to testify in testing file
* remove extra newline in vault provider
* Add cert newline fix to envoy xds
* remove new line from mock provider
* Remove adding a new line from provider and fix it when the cert is read
* Add a comment to explain the fix
* Add missing newline for leaf certs
* fix missing new line
* fix missing new line in leaf certs
* remove extra new line in test
* update changelog
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* fix in vault provider and when reading cache (RPC call)
* fix AWS provider
* fix failing test in the provider
* remove comments and empty lines
* add check for empty cert in test
* fix linter warnings
* add new line for leaf and private key
* use string concat instead of Sprintf
* fix new lines for leaf signing
* preallocate slice and remove append
* Add new line to `SignIntermediate` and `CrossSignCA`
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
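Taken together, the commits above normalize PEM material so every provider hands back certificates with carriage returns stripped and exactly one trailing newline, via a shared helper. A minimal sketch of such a helper (the name and package are assumptions, not the actual Consul code):

```go
package ca

import "strings"

// ensureTrailingNewline normalizes PEM material so that every cert ends with
// exactly one newline: carriage returns are stripped and any run of trailing
// newlines is collapsed to a single one.
func ensureTrailingNewline(pem string) string {
	pem = strings.ReplaceAll(pem, "\r\n", "\n")
	pem = strings.TrimRight(pem, "\n")
	if pem == "" {
		return ""
	}
	return pem + "\n"
}
```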
sync/atomic must be used with 64-bit aligned fields, and that alignment is difficult to
ensure unless the field is the first one in the struct.
https://golang.org/pkg/sync/atomic/#pkg-note-BUG.
As part of this change, we ensure that the SAN extensions are marked as
critical when the subject is empty so that AWS PCA tolerates the loss of
common names well and continues to function as a Connect CA provider.
Parts of this currently hack around a bug in crypto/x509 and can be
removed after https://go-review.googlesource.com/c/go/+/329129 lands in
a Go release.
Note: the AWS PCA tests do not run automatically, but the following
passed locally for me:
ENABLE_AWS_PCA_TESTS=1 go test ./agent/connect/ca -run TestAWS
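On the sync/atomic note above, here is a short illustration of the alignment constraint (the type and field names are illustrative, not from the actual change):

```go
package main

import "sync/atomic"

// On 32-bit platforms, 64-bit words passed to sync/atomic functions must be
// 64-bit aligned; placing them first in the struct is the simplest way to
// guarantee that (https://golang.org/pkg/sync/atomic/#pkg-note-BUG).
type caStats struct {
	// signings must remain the first field so atomic.AddUint64 is safe on
	// 32-bit architectures.
	signings uint64

	providerName string // non-atomic fields follow the 64-bit ones
}

func main() {
	s := caStats{providerName: "aws-pca"}
	atomic.AddUint64(&s.signings, 1)
}
```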
* return an invalid record when asked for an addr DNS name with a type other than A and AAAA
* add changelog
* fix ANY use case and add a test for it
* update changelog type
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* return an empty response if the question record type does not match for addr
* set comment in the right place
* return A/AAAA records in the extra section if the record type is not A/AAAA for addr queries (see the sketch after this list)
* Fix failing test
* remove commented code
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* use require for test validation
* use variable to init struct
* fix failing test
* Update agent/dns.go
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* Update .changelog/10401.txt
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* Update agent/dns.go
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* Update agent/dns.go
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* Update agent/dns.go
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* fix compilation error
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
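Reading the bullets above together: for an addr-style query whose type is not A/AAAA (or ANY), the answer section stays empty and the synthesized address record goes in the extra section. A hedged sketch of that logic using github.com/miekg/dns (names are illustrative, not the actual agent/dns.go code):

```go
package dnsaddr

import (
	"net"

	"github.com/miekg/dns"
)

// addAddrRecord synthesizes the A or AAAA record for an addr-style name and
// places it in the Answer section when the question type can be satisfied
// (A, AAAA or ANY); otherwise it leaves the Answer empty and surfaces the
// address in the Extra section instead.
func addAddrRecord(resp *dns.Msg, qName string, qType uint16, ip net.IP) {
	var rr dns.RR
	if v4 := ip.To4(); v4 != nil {
		rr = &dns.A{
			Hdr: dns.RR_Header{Name: qName, Rrtype: dns.TypeA, Class: dns.ClassINET},
			A:   v4,
		}
	} else {
		rr = &dns.AAAA{
			Hdr:  dns.RR_Header{Name: qName, Rrtype: dns.TypeAAAA, Class: dns.ClassINET},
			AAAA: ip,
		}
	}

	switch qType {
	case dns.TypeA, dns.TypeAAAA, dns.TypeANY:
		resp.Answer = append(resp.Answer, rr)
	default:
		// e.g. an SRV question for an addr name: keep the answer empty but
		// still return the address as additional data.
		resp.Extra = append(resp.Extra, rr)
	}
}
```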
* ui: Move all our icons to use CSS custom properties
The good thing about SASS vars is, if you don't use them they get removed from the final build. Whereas with CSS we have no tree-shaking to get rid of unused CSS custom properties. We can mostly work around this, and for some things like colors it's no big deal if we have some hex codes in the build that we don't use, as hex codes are relatively small.
We've been slowly but surely moving all of our colors (and other things) to use CSS custom properties instead of SASS vars now that we have them available.
This commit makes use of the 'tree-shaking' abilities of @extend to ensure that we only compile in the icons that we use.
This commit is mostly churn-less as we already use @extend for the majority of our icons, so generally there is zero change here for working on the UI, but I did spot one single place where we were using SASS vars instead of @extend. This now uses the new form (second commit)
Interestingly this reduces our CSS payload by ~2kb to ~53kb (around 25kb of that is these icons)
* remove flush for each write to http response in the agent monitor endpoint
* fix race condition when we stop and start the monitor multiple times: the doneCh was closed and never recovered
* start log reading goroutine before adding the sink to avoid filling the log channel before getting a chance of reading from it
* flush every 500ms to optimize log writing on the HTTP server side.
* add changelog file
* add issue url to changelog
* fix changelog url
* Update changelog
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* use a ticker to flush and avoid a race condition when flushing in a different goroutine (see the sketch after this list)
* stop the ticker when done
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* Revert "fix race condition when we stop and start monitor multiple times, the doneCh is closed and never recover."
This reverts commit 1eeddf7a
* wait for log consumer loop to start before registering the sink
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
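Putting the bullets above together, the final shape is: forward log lines to the HTTP response as they arrive, but flush on a fixed interval and from the same goroutine that writes. A rough sketch (not the actual agent monitor handler):

```go
package monitor

import (
	"net/http"
	"time"
)

// streamLogs forwards log lines to the HTTP response and flushes at most
// every 500ms. Writes and flushes happen in the same goroutine, so there is
// no race between them, and the ticker is stopped when the stream ends.
func streamLogs(w http.ResponseWriter, logCh <-chan []byte, done <-chan struct{}) {
	flusher, canFlush := w.(http.Flusher)

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-done:
			if canFlush {
				flusher.Flush()
			}
			return
		case line := <-logCh:
			w.Write(line)
		case <-ticker.C:
			if canFlush {
				flusher.Flush()
			}
		}
	}
}
```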
Updates to a cluster will clear the associated endpoints, and updates to
a listener will clear the associated routes. Update the incremental xDS
logic to account for this implicit cleanup so that we can finish warming
the clusters and listeners.
Fixes #10379
Previously we would return an error if duplicate paths were specified.
This could lead to problems in cases where a user has the same path,
say /healthz, on two different ports.
This validation was added to signal a potential misconfiguration.
Instead we will only check for duplicate listener ports, since that is
what would lead to ambiguity issues when generating xDS config.
In the future we could look into using a single listener and creating
distinct filter chains for each path/port.
* debug: remove the CLI check for debug_enabled
The API allows collecting profiles even when debug_enabled=false, as long as
ACLs are enabled. Remove this check from the CLI so that users do not
need to set debug_enabled=true for no reason.
Also:
- fix the API client to return errors on non-200 status codes for debug
endpoints
- improve the failure messages when pprof data can not be collected
Co-Authored-By: Dhia Ayachi <dhia@hashicorp.com>
* remove parallel test runs
parallel runs create a race condition that fails the debug tests
* snapshot the timestamp at the beginning of the capture
- the timestamp used to create the capture sub-folder is snapshotted only at the beginning of the capture and reused for subsequent captures
- captures append to the file if it already exists
* Revert "snapshot the timestamp at the beginning of the capture"
This reverts commit c2d03346
* Refactor captureDynamic to extract capture logic for each item in a different func
* snapshot the timestamp at the beginning of the capture
- the timestamp used to create the capture sub-folder is snapshotted only at the beginning of the capture and reused for subsequent captures
- captures append to the file if it already exists
* Revert "snapshot the timestamp at the beginning of the capture"
This reverts commit c2d03346
* Refactor captureDynamic to extract capture logic for each item in a different func
* extract the wait group outside the goroutine to avoid a race condition
* capture pprof in a separate goroutine
* perform a single capture of pprof data for the whole duration
* add missing vendor dependency
* add a change log and fix documentation to reflect the change
* create function for timestamp dir creation and simplify error handling
* use error groups and ticker to simplify interval capture loop
* Logs, profiles and traces are captured for the full duration. Metrics, heap and goroutines are captured every interval (see the sketch after this list)
* refactor Logs capture routine and add log capture specific test
* improve error reporting when the log test fails
* change test duration to 1s
* make time parsing in log line more robust
* refactor log time format in a const
* test whether the log line is empty as early as possible and return
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
* rename function to captureShortLived
* more specific changelog
Co-authored-by: Paul Banks <banks@banksco.de>
* update documentation to reflect current implementation
* add test for behavior when invalid param is passed to the command
* fix argument line in test
* a more detailed description of the new behaviour
Co-authored-by: Paul Banks <banks@banksco.de>
* print success right after the capture is done
* remove an unnecessary error check
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* upgraded github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57 => v0.0.0-20210601050228-01bbb1931b22
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
Co-authored-by: Paul Banks <banks@banksco.de>
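The capture loop the bullets above converge on can be sketched with an error group and a ticker; the function names here are illustrative, not the actual consul debug code:

```go
package debug

import (
	"context"
	"time"

	"golang.org/x/sync/errgroup"
)

// capture coordinates the two kinds of capture described above: logs,
// profiles and traces run once for the full duration, while metrics, heap
// and goroutine dumps are captured on every interval tick.
func capture(ctx context.Context, duration, interval time.Duration,
	captureLong func(context.Context) error,
	captureShortLived func() error,
) error {
	ctx, cancel := context.WithTimeout(ctx, duration)
	defer cancel()

	g, ctx := errgroup.WithContext(ctx)

	// Full-duration captures (logs, profile, trace).
	g.Go(func() error { return captureLong(ctx) })

	// Interval captures (metrics, heap, goroutines).
	g.Go(func() error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return nil
			case <-ticker.C:
				if err := captureShortLived(); err != nil {
					return err
				}
			}
		}
	})

	return g.Wait()
}
```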
This PR adds cluster members to the metrics API. The number of members per
segment is reported, as well as the total number of members.
Tested by running a multi-node cluster locally and ensuring the numbers were
correct. Also added unit test coverage to add the new expected gauges to
existing test cases.
Normally the named pipe would buffer up to 64k, but in some cases when a
soft limit is reached, it will only buffer up to 4k.
In either case, we should not deadlock.
This commit changes the pipe-bootstrap command to first buffer all of
stdin into the process, before trying to write it to the named pipe.
This allows the process memory to act as the buffer, instead of the
named pipe.
Also changed the order of operations in `makeBootstrapPipe`. The new
test added in this PR showed that simply buffering in the process memory
was not enough to fix the issue. We also need to ensure that the
`pipe-bootstrap` process is started before we try to write to its
stdin. Otherwise the write will still block.
Also set stdout/stderr on the subprocess, so that any errors are visible
to the user.
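A minimal sketch of the buffering idea described above (not the actual pipe-bootstrap command):

```go
package main

import (
	"io"
	"os"
)

// A sketch of the buffering approach: read all of stdin into process memory
// first, and only then write it to the named pipe, so a pipe that buffers
// only 4k-64k cannot cause the writer to block forever.
func main() {
	data, err := io.ReadAll(os.Stdin)
	if err != nil {
		os.Exit(1)
	}

	// The pipe path is passed as the first argument in this sketch.
	pipe, err := os.OpenFile(os.Args[1], os.O_WRONLY, 0)
	if err != nil {
		os.Exit(1)
	}
	defer pipe.Close()

	if _, err := pipe.Write(data); err != nil {
		os.Exit(1)
	}
}
```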
* debug: remove the CLI check for debug_enabled
The API allows collecting profiles even when debug_enabled=false, as long as
ACLs are enabled. Remove this check from the CLI so that users do not
need to set debug_enabled=true for no reason.
Also:
- fix the API client to return errors on non-200 status codes for debug
endpoints
- improve the failure messages when pprof data can not be collected
Co-Authored-By: Dhia Ayachi <dhia@hashicorp.com>
* remove parallel test runs
parallel runs create a race condition that fails the debug tests
* Add changelog
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* Create and use collapsible notices
* Refactor collapsible-notices
* Split up the topology acceptance tests
* Add acceptance tests for tproxy notices
* Add component file
* Adds additional TProxy notices tests
* Adds conditional to only show collapsible if more than 2 notices are present
* Adds changelog
* Refactoring the conditional for collapsing the notices
* Renaming undefinedIntention to be notDefinedIntention
* Refactor tests
The bulk of this commit is moving the LeaderRoutineManager from the agent/consul package into its own package: lib/gort. It also got a renaming and its Start method now requires a context. Requiring that context required updating a whole bunch of other places in the code.
The prior solution of calling reply.Reset() aged poorly: newer fields
were added to the reply but not added to Reset(), leading serial
blocking query loops on the server to blend replies.
This could manifest as a service-defaults protocol change from
default=>http not reverting back to default after the config entry
responsible was deleted.
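The fragility came from a hand-written Reset() that has to be updated whenever a field is added to the reply; resetting by assigning a fresh zero value sidesteps that entirely. An illustrative sketch (the types are made up for the example):

```go
package servers

// ConfigEntryReply stands in for a blocking-query reply type; any field
// added later is automatically covered by the zero-value reset below,
// unlike a hand-maintained Reset() method that has to be kept in sync.
type ConfigEntryReply struct {
	Protocol string
	Meta     map[string]string
	Index    uint64
}

// resetReply clears the reply between iterations of a server-side blocking
// query loop, so stale fields from a previous iteration cannot blend into
// the next response.
func resetReply(reply *ConfigEntryReply) {
	*reply = ConfigEntryReply{}
}
```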
When the Consul serf health check is failing, this means that the health checks registered with the agent may no longer be correct. Therefore we show a notice to the user when we detect that the serf health check is failing both for the health check listing for nodes and for service instances.
There were a few little things we fixed up whilst we were here:
- We use our @replace decorator to replace an empty Type with serf in the model.
- We noticed that ServiceTags can be null, so we replace that with an empty array.
- We added docs for both our Notice component and the Consul::HealthCheck::List component. Notice now defaults to @type=info.
* Save exposed HTTP or GRPC ports to the agent's store
* Add those to the health checks API so we can retrieve them from the API
* Change redirect-traffic command to also exclude those ports from inbound traffic redirection when expose.checks is set to true.
* Add conditionals to Lock Session list items
* Add changelog
* Show ID in details if there is a name to go in title
* Add copy-button if ID is in the title
* Update TTL conditional
* Update .changelog/10121.txt
Co-authored-by: John Cowen <johncowen@users.noreply.github.com>
Co-authored-by: John Cowen <johncowen@users.noreply.github.com>
This fixes the spacing bug in namespaces by only showing the Description if the namespace has one, and removing the extra 2 pixel margin on dds for when dts aren't rendered/don't exist.
* ui: Add support for showing partial lists in ListCollection
* Add CSS for partial 'View more' button, and move all CSS to /components
* Enable partial view for intention permissions
* ui: Loader amends/improvements
1. Create a JS compatible template only 'glimmer' component so we can
use it with or without glimmer.
2. Add a set of `rose` colors.
3. Animate the brand loader to keep it centered when the side
navigation appears.
4. Tweak the color of Consul::Loader to use a 'rose' color.
5. Move everything loader related to the `app/components/` folder and
add docs.
A recent change in 1.9.x inverted the order of these two lines, which caused the
X-Consul-Effective-Consistency header to be missing for the service health endpoints.
* ui: Fix text search for upstream instances
* Clean up predicates for other model types
* Add some docs around DataCollection and searching
* Enable UI Engineering Docs for our preview sites
* Use debug CSS in dev and staging
* WIP reloadable raft config
* Pre-define new raft gauges
* Update go-metrics to change gauge reset behaviour
* Update raft to pull in new metric and reloadable config
* Add snapshot persistence timing and installSnapshot to our 'protected' list as they can be infrequent but are important
* Update telemetry docs
* Update config and telemetry docs
* Add note to oldestLogAge on when it is visible
* Add changelog entry
* Update website/content/docs/agent/options.mdx
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
* Give descriptive error if auth method not found
Previously during a `consul login -method=blah`, if the auth method was not found, the
error returned would be "ACL not found". This is potentially confusing
because there may be many different ACLs involved in a login: the ACL of
the Consul client, perhaps the binding rule or the auth method.
Now the error will be "auth method blah not found", which is much easier
to debug.
Initially we were loading every potential upstream address into Envoy
and then routing traffic to the logical upstream service. The downside
of this behavior is that traffic meant to go to a specific instance
would be load balanced across ALL instances.
Traffic to specific instance IPs should be forwarded to the original
destination and if it's a destination in the mesh then we should ensure
the appropriate certificates are used.
This PR makes transparent proxying a Kubernetes-only feature for now
since support for other environments requires generating virtual IPs,
and Consul does not do that at the moment.
* Update header logo and inline icon
* Update full logos + layout on loading screen
* Update favicon assets and strategy
- Switches to serve an ico file alongside an SVG file
- Introduces an apple-touch-icon
* Removes unused favicon/meta assets
* Changelog item for ui
* Create component for logo
* Simplify logo component, set brand color
* Fix docs loading state CSS issue
The only thing that needed fixing up pertained to this section of the 1.18.x release notes:
> grpc_stats: the default value for stats_for_all_methods is switched from true to false, in order to avoid possible memory exhaustion due to an untrusted downstream sending a large number of unique method names. The previous default value was deprecated in version 1.14.0. This only changes the behavior when the value is not set. The previous behavior can be used by setting the value to true. This behavior change can be overridden by setting runtime feature envoy.deprecated_features.grpc_stats_filter_enable_stats_for_all_methods_by_default.
For now, to maintain the status quo, I'm explicitly setting `stats_for_all_methods=true` in all versions to avoid relying upon the default.
Additionally the naming of the emitted metrics for these gRPC requests changed slightly so the integration test assertions for `case-grpc` needed adjusting.
This ensures that if someone does include some extension Consul does not currently make use of, that extension is actually usable. Without linking these envoy protobufs into the main binary it can't round trip the escape hatches to send them down to envoy.
Whenever the go-control-plane library is next upgraded we just have to re-run 'make envoy-library'.
This adds support for the Incremental xDS protocol when using xDS v3. This is best reviewed commit-by-commit and will not be squashed when merged.
Union of all commit messages follows to give an overarching summary:
xds: exclusively support incremental xDS when using xDS v3
Attempts to use SoTW via v3 will fail, much like attempts to use incremental via v2 will fail.
Work around a strange older envoy behavior involving empty CDS responses over incremental xDS.
xds: various cleanups and refactors that don't strictly concern the addition of incremental xDS support
Dissolve the connectionInfo struct in favor of per-connection ResourceGenerators instead.
Do a better job of ensuring the xds code uses a well configured logger that accurately describes the connected client.
xds: pull out checkStreamACLs method in advance of a later commit
xds: rewrite SoTW xDS protocol tests to use protobufs rather than hand-rolled json strings
In the test we very lightly reuse some of the more boring protobuf construction helper code that is also technically under test. The important thing of the protocol tests is testing the protocol. The actual inputs and outputs are largely already handled by the xds golden output tests now so these protocol tests don't have to do double-duty.
This also updates the SoTW protocol test to exclusively use xDS v2 which is the only variant of SoTW that will be supported in Consul 1.10.
xds: default xds.Server.AuthCheckFrequency at use-time instead of construction-time
* Use proxy outbound port from TransparentProxyConfig if provided
* If -proxy-id is provided to the redirect-traffic command, exclude any listener ports
from inbound traffic redirection. This includes envoy_prometheus_bind_addr,
envoy_stats_bind_addr, and the ListenerPort from the Expose configuration.
* Allow users to provide additional inbound and outbound ports, outbound CIDRs
and additional user IDs to be excluded from traffic redirection.
This affects both the traffic-redirect command and the iptables SDK package.
This config entry is being renamed primarily because in k8s the name
cluster could be confusing given that the config entry applies across
federated datacenters.
Additionally, this config entry will only apply to Consul as a service
mesh, so the more generic "cluster" name is not needed.
* CLI: Add support for reading internal raft snapshots to snapshot inspect
* Add snapshot inspect test for raw state files
* Add changelog entry
* Update .changelog/10089.txt
The extra argument meant that the blocking query configuration wasn't
being read properly, and therefore the correct ?index wasn't being sent
with the request.
Previously only a single auth method would be saved to the snapshot. This commit fixes the typo
and adds to the test, to show that all auth methods are now saved.
* add http2 ping checks
* fix test issue
* add h2ping check to config resources
* add new test and docs for h2ping
* fix grammatical inconsistency in H2PING documentation
* resolve rebase conflicts, add test for h2ping tls verification failure
* api documentation for h2ping
* update test config data with H2PING
* add H2PING to protocol buffers and update changelog
* fix typo in changelog entry
* Add new consul connect redirect-traffic command for applying traffic redirection rules when Transparent Proxy is enabled.
* Add new iptables package for applying traffic redirection rules with iptables.
* Fix bug in cache where TTLs are effectively ignored
This mostly affects streaming since streaming will immediately return from Fetch calls when the state is Closed on eviction which causes the race condition every time.
However this also affects all other cache types if the fetch call happens to return between the eviction and the next time around the Get loop by any client.
There is a separate bug that allows cache items to be evicted even when there are active clients which is the trigger here.
* Add changelog entry
* Update .changelog/9978.txt
The streaming cache type for service health has no way to handle v1/health/ingress/:service queries as there is no equivalent topic that would return the appropriate data.
Ensure that attempts to use this endpoint will use the old cache-type for now so that they return appropriate data when streaming is enabled.
* Allow passing ALPN next protocols down to connect services. Fixes #4466.
* Update connect/proxy/proxy_test.go
Co-authored-by: Paul Banks <banks@banksco.de>
Co-authored-by: Paul Banks <banks@banksco.de>
This PR adds support for setting QueryOptions on a few agent API
endpoints. Nomad needs to be able to set the Namespace field on
these endpoints to:
- query for services / checks in a namespace
- deregister services / checks in a namespace
- update TTL status on checks in a namespace
* Configure ember-auto-import so we can use a stricter CSP
* Create a fake filesystem using JSON to avoid inline scripts in index
We used to have inline scripts in index.html in order to support embers
filepath fingerprinting and our configurable rootURL.
Instead of using inline scripts we use application/json plus a JSON blob
to create a fake filesystem JSON blob/hash/map to hold all of the
rootURL'ed fingerprinted file paths which we can then retrive later in
non-inline scripts.
We move our inlined polyfills script into the init.js external script,
and we move the CodeMirror syntax highlighting configuration inline
script into the main app itself - into the already existing CodeMirror
initializer (this has been moved so we can lookup a service located
document using ember's DI container)
* Set a strict-ish CSP policy during development
AutopilotServerHealthy now handles the 429 status code.
Previously we would error out and not parse the response. Now both 200 and 429 status codes are considered expected and result in the method returning the reply, allowing API consumers to see not only whether the system is healthy but also which server is unhealthy.
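A sketch of how a client can treat 429 as an expected status for this endpoint rather than an error (the types and function name are illustrative, not the exact api package code):

```go
package api

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// AutopilotHealthReply is a trimmed-down stand-in for the autopilot health
// response body.
type AutopilotHealthReply struct {
	Healthy          bool
	FailureTolerance int
}

// decodeAutopilotHealth treats both 200 (healthy) and 429 (unhealthy) as
// expected statuses and decodes the body in either case, erroring only on
// genuinely unexpected response codes.
func decodeAutopilotHealth(resp *http.Response) (*AutopilotHealthReply, error) {
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusOK, http.StatusTooManyRequests:
		// fall through and decode: 429 just signals an unhealthy server
	default:
		return nil, fmt.Errorf("unexpected response code: %d", resp.StatusCode)
	}

	var out AutopilotHealthReply
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return &out, nil
}
```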
This PR uses the excellent a11y-dialog to implement our modal functionality across the UI.
This package covers all our a11y needs - overlay click and ESC to close, controlling aria-* attributes, focus trap and restore. It's also very small (1.6kb) and has good DOM and JS APIs and also seems to be widely used and well tested.
There is one downside to using this, and that is:
We made use of a very handy characteristic of the relationship between HTML labels and inputs in order to implement our modals previously. Adding a for="id" attribute to a label meant you could control an <input id="id" /> from anywhere else in the page without having to pass javascript objects around; it's just based on using the same string for the for attribute and the id attribute. This allowed us to easily open our login dialog with CSS from anywhere within the UI without having to manage passing around a javascript object/function/method in order to open the dialog.
We've PRed #9813 which includes an approach which would make passing around a JS modal object easier to do. But in the meantime we've added a little 'hack' here using an additional <input /> element and a change listener which allows us to keep this label/input characteristic of our old modals. I'd originally thought this would be a temporary amendment while waiting on #9813, but the more I think about it, the more I think it's quite a nice thing to keep - so longer term we may/may not keep this.
Allows setting -prometheus-backend-port to configure the cluster
envoy_prometheus_bind_addr points to.
Allows setting -prometheus-scrape-path to configure which path
envoy_prometheus_bind_addr exposes metrics on.
-prometheus-backend-port is used by the consul-k8s metrics merging feature, to
configure envoy_prometheus_bind_addr to point to the merged metrics
endpoint that combines Envoy and service metrics so that one set of
annotations on a Pod can scrape metrics from the service and its Envoy
sidecar.
-prometheus-scrape-path is used to allow configurability of the path
where prometheus metrics are exposed on envoy_prometheus_bind_addr.
Previous to this commit, the API response would include Gateway
Addresses in the form `domain.name.:8080`, which due to the addition of
the port is probably not the expected response.
This commit right-trims any `.` characters from the end of the domain
before formatting the address to include the port, resulting in
`domain.name:8080`.
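The fix amounts to a one-line trim before the port is appended; roughly (the function name is illustrative):

```go
package api

import (
	"strconv"
	"strings"
)

// formatGatewayAddress trims any trailing dots from the DNS name before the
// port is appended, so "domain.name." with port 8080 becomes
// "domain.name:8080" rather than "domain.name.:8080".
func formatGatewayAddress(host string, port int) string {
	return strings.TrimRight(host, ".") + ":" + strconv.Itoa(port)
}
```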
Note that this does NOT upgrade to xDS v3. That will come in a future PR.
Additionally:
- Ignored staticcheck warnings about how github.com/golang/protobuf is deprecated.
- Shuffled some agent/xds imports in advance of a later xDS v3 upgrade.
- Remove support for envoy 1.13.x but don't add in 1.17.x yet. We have to wait until the xDS v3 support is added in a follow-up PR.
Fixes #8425
Use the agent token when de-registering in anti-entropy sync, when there is no
service or check token.
The agent token will fall back to the default (aka user) token if no agent
token is set, so the existing behaviour still works, but it will prefer
the agent token over the user token if both are set.
ref: https://www.consul.io/docs/agent/options#acl_tokens
The agent token seems more appropriate in this case, since this is an
"internal operation", not something initiated by the user.
This commit uses the internal authorize endpoint along with ember-can to further restrict user access to certain UI features and navigational elements depending on the user's ACL token.
* A GET of the /acl/auth-method/:name endpoint returns the fields
MaxTokenTTL and TokenLocality, while a LIST (/acl/auth-methods) does
not.
The list command returns a filtered subset of the full set. This is
somewhat deliberate, so that secrets aren't shown, but the TTL and
Locality fields aren't (IMO) security critical, and it is useful for
the front end to be able to show them.
For consistency these changes mirror the 'omit empty' and string
representation choices made for the GET call.
This includes changes to the gRPC and API code in the client.
The new output looks similar to this
curl 'http://localhost:8500/v1/acl/auth-methods' | jq '.'
[
  {
    "MaxTokenTTL": "8m20s",
    "Name": "minikube-ttl-local2",
    "Type": "kubernetes",
    "Description": "minikube auth method",
    "TokenLocality": "local",
    "CreateIndex": 530,
    "ModifyIndex": 530,
    "Namespace": "default"
  }
]
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
* Add changelog
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
Previously a snapshot created as part of a resume-stream request could have incorrectly
cached the newSnapshotToFollow event. This would cause clients to error because they
received an unexpected framing event.
This fixes an issue where leaf certificates issued in primary
datacenters using Vault as a Connect CA would be reissued very
frequently (every ~20 seconds) because the logic meant to detect root
rotation was errantly triggering.
The hash of the rootCA was being compared against a hash of the
intermediateCA and always failing. This doesn't apply to the Consul
built-in CA provider because there is no intermediate in use in the
primary DC.
This is reminiscent of #6513
In a situation where the mesh gateway is configured to bind to multiple
network interfaces, we use a feature called 'tagged addresses'.
Sometimes an address is duplicated across multiple tags such as 'lan'
and 'lan_ipv4'.
There is code to deduplicate these things when creating envoy listeners,
but that code doesn't ensure that the same tag wins every time. If the
winning tag flaps between xDS discovery requests it will cause the
listener to be drained and replaced.
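Making the deduplication deterministic just means always letting the same tag win, for example by iterating the tags in sorted order; a sketch, not the actual listener code:

```go
package xds

import "sort"

// dedupTaggedAddresses keeps one tag per distinct address and always lets
// the lexically-smallest tag win, so the tag chosen for a duplicated
// address (e.g. "lan" vs "lan_ipv4") is stable across discovery requests
// and the listener is not needlessly drained and replaced.
func dedupTaggedAddresses(tagged map[string]string) map[string]string {
	tags := make([]string, 0, len(tagged))
	for tag := range tagged {
		tags = append(tags, tag)
	}
	sort.Strings(tags)

	winner := make(map[string]string, len(tagged)) // address -> winning tag
	out := make(map[string]string, len(tagged))    // winning tag -> address
	for _, tag := range tags {
		addr := tagged[tag]
		if _, claimed := winner[addr]; claimed {
			continue
		}
		winner[addr] = tag
		out[tag] = addr
	}
	return out
}
```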
* CSS for moving from a horizontal main menu to a side/vertical one
* Add <App /> Component and rearrange <HashicorpConsul /> to use it
1. HashicorpConsul now uses <App />
2. <App /> is now translated and adds 'skip to main content' functionality
3. Adds ember-in-viewport addon in order to visibly hide main navigation
items in order to take them out of focus/tabbing
4. Slight amends to the dom service while I was there
This way we only have to wait for the serf barrier to pass once before
we can make use of federation state APIs. Without this patch every
restart needs to re-compute the change.
Adds a 'status' for the filtering/searching in the UI; without this it's not super clear that you are filtering a recordset due to the menu selections being hidden once closed. You can also use the pills in this status view to delete individual filters.
* Add templating to inject JSON into an application/json script tag
Plus an external script in order to pick it out and inject the values we
need injecting into ember's environment meta tag.
The UI still uses env style naming (CONSUL_*) but we use the new style
JSON/golang props behind the scenes.
Co-authored-by: Paul Banks <banks@banksco.de>
After fixing that bug I uncovered a couple more:
Fix an issue where we might try to cross sign a cert when we never had a valid root.
Fix a potential issue where reconfiguring the CA could cause either the Vault or AWS PCA CA providers to delete resources that are still required by the new incarnation of the CA.
* ui: Keep track of existing intentions and use those to save changes
Previously we risked overwriting existing data in an intention if we
tried to save an intention without having loaded it first, for example
Description and Metadata would have been overwritten.
This change loads in all the intentions for an origin service so we can
pick off the one we need to save and change to ensure that we don't
overwrite any existing data.
The field was not being included in the cache info key. This would result in a DNS request for
web.service.consul returning the same result as web.ingress.consul, when those results should
not be the same.
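Cache entries are looked up by a key derived from the request, so any field that changes the meaning of the request has to be folded into that key. An illustrative sketch (the type and field names are stand-ins, not the real cache-type code):

```go
package cachetype

import "fmt"

// ServiceHealthRequest is a trimmed-down stand-in for the real request type;
// the Ingress field here illustrates the kind of field that must be folded
// into the cache key.
type ServiceHealthRequest struct {
	Datacenter  string
	ServiceName string
	Ingress     bool
}

// cacheKey includes every field that changes the meaning of the request.
// Leaving Ingress out would make web.service.consul and web.ingress.consul
// lookups share a single cached result.
func (r *ServiceHealthRequest) cacheKey() string {
	return fmt.Sprintf("%s/%s/ingress=%t", r.Datacenter, r.ServiceName, r.Ingress)
}
```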
* Fix bug in usage metrics that caused a negative count to occur
There were a couple of instances were usage metrics would do the wrong
thing and result in incorrect counts, causing the count to attempt to
decrement below zero and return an error. The usage metrics did not
account for various places where a single transaction could
delete/update/add multiple service instances at once.
We also remove the error when attempting to decrement below zero, and
instead just make sure we do not accidentally underflow the unsigned
integer. This is a more graceful failure than returning an error and not
allowing a transaction to commit.
* Add changelog
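The underflow guard itself is small; a sketch of the saturating decrement (names are illustrative):

```go
package usagemetrics

// decrementUsage lowers a usage counter by delta without ever underflowing
// the unsigned value. Previously a would-be negative count returned an error
// and prevented the transaction from committing; clamping to zero is the
// more graceful failure described above.
func decrementUsage(current, delta uint64) uint64 {
	if delta > current {
		return 0
	}
	return current - delta
}
```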
This PR is based on the previous work by @snuggie12 in PR #6825. It adds the command consul intention list to list all available intentions. The list functionality for intentions seems a bit overdue as it's just very handy. The web UI cannot list intentions outside of the default namespace, and using the API is sometimes not the friendliest option. ;)
I cherry-picked snuggie12's commits, as they did most of the heavy lifting (thanks again @snuggie12 for your great work!). The changes in the original commit mostly still worked on the current HEAD. On top of that I added support for namespaces and fixed the docs as they are managed differently today. Also the requested changes related to the "Connect" references in the original PRs have been addressed.
Fixes #5652
Co-authored-by: Matt Hoey <mhoey05@jcu.edu>
* Display a warning when rpc.enable_streaming = true is set on a client
This option has no effect when running as a client agent.
* Added warning when server starts with use_streaming_backend but without rpc.enable_streaming
* Added unit test
This way we only have to wait for the serf barrier to pass once before
we can upgrade to v2 acls. Without this patch every restart needs to
re-compute the change, and potentially if a stray older node joins after
a migration it might regress back to v1 mode which would be problematic.
This PR adds the ns=* query parameter when namespaces are enabled to keep backwards compatibility with how the UI used to work (Intentions page always lists all intention across all namespace you have access to)
I found a tiny dev bug for printing out the current URL during acceptance testing and fixed that up while I was there.
Nodes themselves are not namespaced, so we'd originally assumed we did not need to pass through the ns query parameter when listing or viewing nodes.
As it turns out the API endpoints we use to list and view nodes (and related things) return things that are namespaced, therefore any API requests for nodes do require the ns query parameter to be passed through to the request.
This PR adds the necessary ns query param to all things Node, apart from the querying for the leader which only returns node related information.
Additionally here we decided to show 0 Services text in the node listing if there are nodes with no service instances within the namespace you are viewing, as this is clearer than showing nothing at all. We also cleaned up/standardized the text we use in the empty state for service instances.
Previously the tokens would fail to insert into the secondary's state
store because the AuthMethod field of the ACLToken did not point to a
known auth method from the primary.
* server: fix panic when deleting a non existent intention
* add changelog
* Always return an error when deleting non-existent ixn
Co-authored-by: freddygv <gh@freddygv.xyz>
A vulnerability was identified in Consul and Consul Enterprise (“Consul”) such that operators with `operator:read` ACL permissions are able to read the Consul Connect CA configuration when explicitly configured with the `/v1/connect/ca/configuration` endpoint, including the private key. This allows the user to effectively privilege escalate by enabling the ability to mint certificates for any Consul Connect services. This would potentially allow them to masquerade (receive/send traffic) as any service in the mesh.
--
This PR increases the permissions required to read the Connect CA's private key when it was configured via the `/connect/ca/configuration` endpoint. They are now `operator:write`.
This PR updates the tags that we generate for Envoy stats.
Several of these come with breaking changes, since we can't keep two stats prefixes for a filter.
The Intention.Apply RPC is quite large, so this PR attempts to break it down into smaller functions and dissolves the pre-config-entry approach to the breakdown as it only confused things.
Header is: X-Consul-Default-ACL-Policy=<allow|deny>
This is of particular utility when fetching matching intentions, as the
fallthrough for a request that doesn't match any intentions is to
enforce using the default acl policy.
The Catalog, Config Entry, KV and Session resources potentially re-validate the input as its coming in. We need to prevent snapshot restoration failures due to missing namespaces or namespaces that are being deleted in enterprise.
Previously config entries sharing a kind & name but in different
namespaces could occasionally cause "stuck states" in replication
because the namespace fields were ignored during the differential
comparison phase.
Example:
Two config entries written to the primary:
kind=A,name=web,namespace=bar
kind=A,name=web,namespace=foo
Under the covers these both get saved to memdb, so they are sorted by
all 3 components (kind,name,namespace) during natural iteration. This
means that before the replication code does its own incomplete sort,
the underlying data IS sorted by namespace ascending (bar comes before
foo).
After one pass of replication the primary and secondary datacenters have
the same set of config entries present. If
"kind=A,name=web,namespace=bar" were to be deleted, then things get
weird. Before replication the two sides look like:
primary: [
kind=A,name=web,namespace=foo
]
secondary: [
kind=A,name=web,namespace=bar
kind=A,name=web,namespace=foo
]
The differential comparison phase walks these two lists in sorted order
and first compares "kind=A,name=web,namespace=foo" vs
"kind=A,name=web,namespace=bar" and falsely determines they are the SAME
and thus causes an update of "kind=A,name=web,namespace=foo". Then it
compares "<nothing>" with "kind=A,name=web,namespace=foo" and falsely
determines that the latter should be DELETED.
During reconciliation the deletes are processed before updates, and so
for a brief moment in the secondary "kind=A,name=web,namespace=foo" is
erroneously deleted and then immediately restored.
Unfortunately after this replication phase the final state is identical
to the initial state, so when it loops around again (rate limited) it
repeats the same set of operations indefinitely.
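The underlying fix is for the differential comparison to order entries by the full (kind, name, namespace) tuple, matching the memdb iteration order described above; a sketch:

```go
package replication

// entryKey is the identity of a config entry for replication purposes.
type entryKey struct {
	Kind      string
	Name      string
	Namespace string
}

// compare orders entries by kind, then name, then namespace - the same total
// order the underlying memdb iteration produces - so the differential walk
// never pairs up entries that differ only by namespace.
func compare(a, b entryKey) int {
	switch {
	case a.Kind != b.Kind:
		if a.Kind < b.Kind {
			return -1
		}
		return 1
	case a.Name != b.Name:
		if a.Name < b.Name {
			return -1
		}
		return 1
	case a.Namespace != b.Namespace:
		if a.Namespace < b.Namespace {
			return -1
		}
		return 1
	default:
		return 0
	}
}
```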
When a service is deregistered, we check whether matching services were
registered as sidecars along with it and deregister them as well.
To determine if a service is indeed a sidecar we check the
structs.ServiceNode.LocallyRegisteredAsSidecar property. However, to
avoid internal API leakage, it is excluded from JSON serialization,
meaning it is not persisted to disk either.
When the agent is restarted, this property is lost and sidecars are no
longer deregistered along with their parent service.
To fix this, we now specifically save this property in the persisted
service file.
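In other words, the flag has to travel with the agent's persisted service file even though it stays out of the public API serialization; roughly (the field set is illustrative):

```go
package agent

import "encoding/json"

// persistedService sketches the on-disk representation of a locally
// registered service. The sidecar flag is written out explicitly because it
// is excluded from the public API JSON, so without this it would be lost
// across an agent restart and the sidecar would no longer be deregistered
// with its parent service.
type persistedService struct {
	Token                      string
	Service                    json.RawMessage
	Source                     string
	LocallyRegisteredAsSidecar bool
}

func encodePersistedService(p *persistedService) ([]byte, error) {
	return json.Marshal(p)
}
```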
* Create 8858.txt
* add separate changelog entries for original topology impl and intentions
Co-authored-by: Mike Morris <mikemorris@users.noreply.github.com>
* Consul Service meta wrongly computes and exposes non_voter meta
In serf tags, enterprise members that are non-voters use the tag
`nonvoter=1`, not `non_voter = false`, so non-voters in members
were wrongly displayed as voters (see the sketch after the demonstration below).
Demonstration:
```
consul members -detailed|grep voter
consul20-hk5 10.200.100.110:8301 alive acls=1,build=1.8.4+ent,dc=hk5,expect=3,ft_fs=1,ft_ns=1,id=xxxxxxxx-5629-08f2-3a79-10a1ab3849d5,nonvoter=1,port=8300,raft_vsn=3,role=consul,segment=<all>,use_tls=1,vsn=2,vsn_max=3,vsn_min=2,wan_join_port=8302
```
* Added changelog
* Added changelog entry
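A sketch of reading the tag the way enterprise servers actually publish it (the helper is illustrative):

```go
package main

import "fmt"

// isNonVoter reports whether a serf member's tags mark it as a non-voting
// server. Enterprise non-voters advertise `nonvoter=1`; there is no
// `non_voter` tag, so looking for one made every member appear to be a voter.
func isNonVoter(tags map[string]string) bool {
	return tags["nonvoter"] == "1"
}

func main() {
	tags := map[string]string{"nonvoter": "1", "role": "consul"}
	fmt.Println(isNonVoter(tags)) // true
}
```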
This is the recommended proxy integration API for listing intentions
which should not require an active connection to the servers to resolve
after the initial cache filling.
This allows client agents to be run in a more stateless manner where they may be abruptly terminated and not expected to come back. If an agent advertises a per-agent reconnect timeout using the advertise_reconnect_timeout configuration, then when that agent leaves, other agents will wait only that amount of time for the agent to come back before reaping it.
This has the advantageous side effect of causing servers to deregister the node/services/checks for that agent sooner than if the global reconnect_timeout was used.
- Upgrade the ConfigEntry.ListAll RPC to be kind-aware so that older
copies of consul will not see new config entries they don't understand
replicate down.
- Add shim conversion code so that the old API/CLI method of interacting
with intentions will continue to work so long as none of these are
edited via config entry endpoints. Almost all of the read-only APIs will
continue to function indefinitely.
- Add new APIs that operate on individual intentions without IDs so that
the UI doesn't need to implement CAS operations.
- Add a new serf feature flag indicating support for
intentions-as-config-entries.
- The old line-item intentions way of interacting with the state store
will transparently flip between the legacy memdb table and the config
entry representations so that readers will never see a hiccup during
migration where the results are incomplete. It uses a piece of system
metadata to control the flip.
- The primary datacenter will begin migrating intentions into config
entries on startup once all servers in the datacenter are on a version
of Consul with the intentions-as-config-entries feature flag. When it is
complete the old state store representations will be cleared. We also
record a piece of system metadata indicating this has occurred. We use
this metadata to skip ALL of this code the next time the leader starts
up.
- The secondary datacenters continue to run the old intentions
replicator until all servers in the secondary DC and primary DC support
intentions-as-config-entries (via serf flag). Once this condition is met
the old intentions replicator ceases.
- The secondary datacenters replicate the new config entries as they are
migrated in the primary. When they detect that the primary has zeroed
its old state store table, it waits until all config entries up to that
point are replicated and then zeroes its own copy of the old state store
table. We also record a piece of system metadata indicating this has
occurred. We use this metadata to skip ALL of this code the next time
the leader starts up.
This adds a new very tiny memdb table and corresponding raft operation
for updating a very small effective map[string]string collection of
"system metadata". This can persistently record a fact about the Consul
state machine itself.
The first use of this feature will come in a later PR.
This really only matters for unit tests, since typically if an agent shuts down its server, it follows that up by exiting the process, which would also clean up all of the networking anyway.
Fixes #8755
Since I was updating the interface, I also added the missing `GetNamespace()`.
Depending upon how you look at it, this is a breaking change since it adds methods to the exported interface `api.ConfigEntry`. Given that you cannot define your own config entry kinds, and all of the machinery of the `api.Client` acts like a factory to construct the canned ones from the rest of the module, this feels like it's not a problematic change as it would only break someone who had reimplemented the `ConfigEntry` interface themselves for no apparent utility?
Whenever an upsert/deletion of a config entry happens, within the open
state store transaction we speculatively test compile all discovery
chains that may be affected by the pending modification to verify that
the write would not create an erroneous scenario (such as splitting
traffic to a subset that did not exist).
If a single discovery chain evaluation references two config entries
with the same kind and name in different namespaces then sometimes the
upsert/deletion would be falsely rejected. It does not appear as though
this bug would've let invalid writes through to the state store so the
correction does not require a cleanup phase.
Lots of constants were added for various tags that would concern users and are not already parsed out.
Additionally two methods on the AgentMember type were added to ask a member what its ACL mode is and whether it's a server or not.
During gossip encryption key rotation it would be nice to be able to see if all nodes are using the same key. This PR adds another field to the json response from `GET v1/operator/keyring` which lists the primary keys in use per dc. That way an operator can tell when a key was successfully set up as the primary key.
Based on https://github.com/hashicorp/serf/pull/611 to add primary key to list keyring output:
```json
[
{
"WAN": true,
"Datacenter": "dc2",
"Segment": "",
"Keys": {
"0OuM4oC3Os18OblWiBbZUaHA7Hk+tNs/6nhNYtaNduM=": 6,
"SINm887hKTzmMWeBNKTJReaTLX3mBEJKriDyt88Ad+g=": 6
},
"PrimaryKeys": {
"SINm887hKTzmMWeBNKTJReaTLX3mBEJKriDyt88Ad+g=": 6
},
"NumNodes": 6
},
{
"WAN": false,
"Datacenter": "dc2",
"Segment": "",
"Keys": {
"0OuM4oC3Os18OblWiBbZUaHA7Hk+tNs/6nhNYtaNduM=": 8,
"SINm887hKTzmMWeBNKTJReaTLX3mBEJKriDyt88Ad+g=": 8
},
"PrimaryKeys": {
"SINm887hKTzmMWeBNKTJReaTLX3mBEJKriDyt88Ad+g=": 8
},
"NumNodes": 8
},
{
"WAN": false,
"Datacenter": "dc1",
"Segment": "",
"Keys": {
"0OuM4oC3Os18OblWiBbZUaHA7Hk+tNs/6nhNYtaNduM=": 3,
"SINm887hKTzmMWeBNKTJReaTLX3mBEJKriDyt88Ad+g=": 8
},
"PrimaryKeys": {
"SINm887hKTzmMWeBNKTJReaTLX3mBEJKriDyt88Ad+g=": 8
},
"NumNodes": 8
}
]
```
I intentionally did not change the CLI output because I didn't find a good way of displaying this information. There are a couple of options that we could implement later:
* add a flag to show the primary keys
* add a flag to show json output
Fixes #3393.
Issue and PR numbers do not overlap; they are based on the same counter.
A PR can also be linked to via an issue URL: if it is a PR, GitHub will
redirect to it.
This change has the benefit that one can link to both issues and PRs.