We noticed that the Service Instance listing on both the Node and Service views was not taking proxy instance health into account. This fixes that, so that the small health check summary in each Service Instance row includes the proxy instance's health checks when displaying Service Instance health (after all, if the proxy instance is unhealthy then so is the service instance it should be proxying).
* Refactor Consul::InstanceChecks with docs
* Add to-hash helper, which will return an object keyed by a prop
* Stop using/relying on ember-data type things, just use a hash lookup
* For the moment add an equivalent "just give me proxies" model prop
* Start stitching things together; this one requires an extra HTTP request
...previously we weren't even requesting proxy instances here
* Finish up the stitching
* Document Consul::ServiceInstance::List while I'm here
* Fix up navigation mocks Name > Service
Fixes #11876
This enforces that multiple xDS mutations are not issued on the same ADS connection at once, so that we can fully control the order in which they are applied. The original code made assumptions about the way multiple in-flight mutations were applied on the Envoy side that were incorrect.
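A minimal sketch of one way to serialize these mutations; the names here are hypothetical, not the actual implementation:

```go
// Funnel all mutations for one ADS connection through a single worker
// goroutine so each runs to completion before the next is dequeued,
// making the order they are applied in fully deterministic.
type adsConn struct {
	mutations chan func() error
}

func newADSConn() *adsConn {
	c := &adsConn{mutations: make(chan func() error)}
	go func() {
		for m := range c.mutations {
			if err := m(); err != nil {
				return // tear down the connection on failure
			}
		}
	}()
	return c
}

func (c *adsConn) enqueue(m func() error) {
	c.mutations <- m
}
```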
When a wildcard xDS type (LDS/CDS/SRDS) reconnects to a delta xDS stream,
prior to Envoy `1.19.0` it would populate the `ResourceNamesSubscribe` field
with the full list of currently subscribed items, instead of simply omitting it
to signal that it wanted everything (which is what wildcard mode means).
This upstream issue was filed as envoyproxy/envoy#16063 and fixed in
envoyproxy/envoy#16153, which went out in Envoy `1.19.0` and carries forward to
later versions (it was later refactored in envoyproxy/envoy#16855).
This PR conditionally forces LDS/CDS to be wildcard-only even when the
connected Envoy requests a non-wildcard subscription, but only does so on
versions prior to `1.19.0`, as we should not need to do this on later versions.
This fixes the failure case as described here: #11833 (comment)
Co-authored-by: Huan Wang <fredwanghuan@gmail.com>
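A sketch of the version gate, assuming the `github.com/hashicorp/go-version` library; the function name and the restriction to the LDS/CDS type URLs are illustrative:

```go
import "github.com/hashicorp/go-version"

// Envoy 1.19.0 fixed the ResourceNamesSubscribe behavior upstream.
var wildcardFixedVersion = version.Must(version.NewVersion("1.19.0"))

// forceWildcard reports whether an LDS/CDS subscription should be treated
// as wildcard-only regardless of what the connected Envoy requested.
func forceWildcard(envoyVersion *version.Version, typeURL string) bool {
	isWildcardType := typeURL == "type.googleapis.com/envoy.config.listener.v3.Listener" ||
		typeURL == "type.googleapis.com/envoy.config.cluster.v3.Cluster"
	return isWildcardType && envoyVersion.LessThan(wildcardFixedVersion)
}
```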
- Adding 'Partition' and 'RetryJoin' options allows test cases where
one would like to spin up a Consul agent in a non-default partition to
test use cases that are common when enabling Admin Partitions on
Kubernetes.
The fix here is twofold:
- We shouldn't be providing the DataSource (which loads the data) with an id when we are creating from within a folder (in the buggy code we were providing the parentKey of the new KV entry being created)
- Being able to provide an empty id to the DataSource/KV repository and have that repository respond with a newly created object is more in line with the "new way of doing forms", hence the corresponding code to return a newly created ember-data object. As we fixed the actual bug in point 1, we need to make sure the repository responds with an empty object when the requested id is empty.
* xds: refactor ingress listener SDS configuration
* xds: update resolveListenerSDS call args in listeners_test
* ingress: add TLS min, max and cipher suites to GatewayTLSConfig
* xds: implement envoyTLSVersions and envoyTLSCipherSuites
* xds: merge TLS config (see the sketch after this list)
* xds: configure TLS parameters with ingress TLS context from leaf
* xds: nil check in resolveListenerTLSConfig validation
* xds: nil check in makeTLSParameters* functions
* changelog: add entry for TLS params on ingress config entries
* xds: remove indirection for TLS params in TLSConfig structs
* xds: return tlsContext, nil instead of ambiguous err
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
* xds: switch zero checks to types.TLSVersionUnspecified
* ingress: add validation for ingress config entry TLS params
* ingress: validate listener TLS config
* xds: add basic ingress with TLS params tests
* xds: add ingress listeners mixed TLS min version defaults precedence test
* xds: add more explicit tests for ingress listeners inheriting gateway defaults
* xds: add test for single TLS listener on gateway without TLS defaults
* xds: regen golden files for TLSVersionInvalid zero value, add TLSVersionAuto listener test
* types/tls: change TLSVersion to string
* types/tls: update TLSCipherSuite to string type
* types/tls: implement validation functions for TLSVersion and TLSCipherSuites, make some maps private
* api: add TLS params to GatewayTLSConfig, add tests
* api: add TLSMinVersion to ingress gateway config entry test JSON
* xds: switch to Envoy TLS cipher suite encoding from types package
* xds: fixup validation for TLSv1_3 min version with cipher suites
* add some kitchen sink tests and add a missing struct tag
* xds: check if mergedCfg.TLSVersion is in TLSVersionsWithConfigurableCipherSuites
* xds: update connectTLSEnabled comment
* xds: remove unused resolveGatewayServiceTLSConfig function
* xds: add makeCommonTLSContextFromLeafWithoutParams
* types/tls: add LessThan comparator function for concrete values
* types/tls: change tlsVersions validation map from string to TLSVersion keys
* types/tls: remove unused envoyTLSCipherSuites
* types/tls: enable chacha20 cipher suites for Consul agent
* types/tls: remove insecure cipher suites from allowed config
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 and TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 are both explicitly listed as insecure and disabled in the Go source.
Refs https://cs.opensource.google/go/go/+/refs/tags/go1.17.3:src/crypto/tls/cipher_suites.go;l=329-330
* types/tls: add ValidateConsulAgentCipherSuites function, make direct lookup map private
* types/tls: return all unmatched cipher suites in validation errors
* xds: check that Envoy API value matching TLS version is found when building TlsParameters
* types/tls: check that value is found in map before appending to slice in MarshalEnvoyTLSCipherSuiteStrings
* types/tls: cast to string rather than fmt.Printf in TLSCipherSuite.String()
* xds: add TLSVersionUnspecified to list of configurable cipher suites
* structs: update note about config entry warning
* xds: remove TLS min version cipher suite unconfigurable test placeholder
* types/tls: update tests to remove assumption about private map values
Co-authored-by: R.B. Boyer <rb@hashicorp.com>
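A minimal sketch of the merge step referenced above: per-listener TLS settings override the gateway-level defaults field by field. The struct and field names follow the ingress gateway config entry, but treat this as illustrative rather than the exact code:

```go
func mergeTLSConfig(gatewayDefaults, listener structs.GatewayTLSConfig) structs.GatewayTLSConfig {
	merged := gatewayDefaults
	if listener.TLSMinVersion != types.TLSVersionUnspecified {
		merged.TLSMinVersion = listener.TLSMinVersion
	}
	if listener.TLSMaxVersion != types.TLSVersionUnspecified {
		merged.TLSMaxVersion = listener.TLSMaxVersion
	}
	if len(listener.CipherSuites) != 0 {
		merged.CipherSuites = listener.CipherSuites
	}
	return merged
}
```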
* Make sure the mocks reflect the requested partition/namespace
* Ensure partition is passed through to the HTTP adapter
* Pass AuthMethod object through to TokenSource in order to use Partition
* Change up docs and add potential improvements for future
* Pass the query partition back onto the response
* Make sure the OIDC callback mock returns a Partition
* Enable OIDC provider mock overwriting during acceptance testing
* Make sure we can enable partitions and SSO post-bootup (only required for now)
* Wire up oidc provider mocking
* Add SSO full auth flow acceptance tests
* ui: Don't even ask whether we are authorized for a KV...
...just let the actual API tell us in the response, thin-client style.
* Add some similar commenting for previous PRs related to this problem
* Add some less fake API data
* Rename the models class so as to not be confused with JS Proxies
* Rearrange routlets slightly and add some initial outletFor tests
* Move away from a MeshChecks computed property and just use a helper
* Just use ServiceChecks for healthiness filtering for the moment
* Make TProxy cookie configurable
* Amend exposed paths and upstreams so they know about meta AND proxy
* Slight bit of TaggedAddresses refactor while I was checking for `meta` etc
* Document CONSUL_TPROXY_ENABLE
We recently changed the intentions form to take a full model of a dc rather than just the string identifier (so {Name: 'dc', Primary: true} vs just 'dc'), in order to know whether the DC is the primary or not.
Unfortunately, we only did this on the global intentions page, not the per service intentions page. This makes it impossible to save an intention from the per service intentions page (whilst you can still save intentions from the global intentions page as normal).
The fix here pretty much copy/pastes the approach taken in the global intention edit template over to the per service intention edit template.
Tests have been added for creation in the per service intentions section; these again are pretty much copied from the global ones. Unfortunately such tests didn't exist previously, which would have helped prevent this bug.
Update the `/agent/check/deregister/` API endpoint to return a 404
HTTP response code when an attempt is made to de-register a check ID
that does not exist on the agent.
This brings the behavior of /agent/check/deregister/ in line with the
behavior of /agent/service/deregister/ which was changed in #10632 to
similarly return a 404 when de-registering non-existent services.
Fixes #5821
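A sketch of the new behavior; the handler shape and existence check are illustrative rather than the exact Consul code:

```go
type checkStore interface {
	checkExists(id string) bool // hypothetical lookup
}

func agentDeregisterCheck(agent checkStore, resp http.ResponseWriter, req *http.Request) {
	checkID := strings.TrimPrefix(req.URL.Path, "/v1/agent/check/deregister/")
	if !agent.checkExists(checkID) {
		http.Error(resp, fmt.Sprintf("Unknown check ID %q", checkID), http.StatusNotFound)
		return
	}
	// ...deregister as before and return 200...
}
```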
* clone the service under lock to avoid a data race (see the sketch after this list)
* add change log
* create a struct and copy the pointer to mutate it to avoid a data race
* fix failing test
* revert added space
* add comments to clarify the data race
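A sketch of the pattern from the first commit, with illustrative names: take a copy while holding the lock, then mutate the copy outside it:

```go
var (
	mu       sync.RWMutex
	services map[string]*structs.NodeService
)

// cloneService returns a private (shallow) copy so callers can mutate it
// without racing against readers of the shared map value.
func cloneService(id string) *structs.NodeService {
	mu.RLock()
	defer mu.RUnlock()
	svc := *services[id] // copy taken under the lock
	return &svc
}
```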
Error messages related to service and check operations previously included
the following substrings:
- service %q
- check %q
From this error message, it isn't clear that the expected field is the ID for
the entity, not the name. For example, if the user has a service named test,
the error message would read 'Unknown service "test"'. This is misleading -
a service with that *name* does exist, but not with that *ID*.
The substrings above have been modified to make it clear that ID is needed,
not name:
- service with ID %q
- check with ID %q
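For example, the updated message construction (sketch):

```go
// before: return fmt.Errorf("Unknown service %q", id)
return fmt.Errorf("Unknown service with ID %q", id)
```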
* ui: Add login button to per service intentions for zero results
* Add login button and consistent header for when you have zero nodes
* `services` doesn't exist, use `items`; consequently:
Previous to this fix we would not show a more tailored message for when
your empty result set was due to a user search rather than an empty
result set straight from the backend
* Fix `error` > `@error` in ErrorState plus code formatting and more docs
* Changelog
When a URL path is not found, return a non-empty message with the 404 status
code to help the user understand what went wrong. If the URL path was not
prefixed with '/v1/', suggest that may be the cause of the problem (which is a
common mistake).
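A sketch of such a fallback handler; names are illustrative:

```go
func notFound(resp http.ResponseWriter, req *http.Request) {
	msg := fmt.Sprintf("Invalid URL path %q: not a recognized HTTP API endpoint.", req.URL.Path)
	if !strings.HasPrefix(req.URL.Path, "/v1/") {
		msg += " Note that API paths must be prefixed with /v1/."
	}
	http.Error(resp, msg, http.StatusNotFound)
}
```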
* Update disco fixtures now we have partitions
* Add virtual-admin-6 fixture with partition 'redirects' and failovers
* Properly cope with extra partition segment for splitters and resolvers
* Make 'redirects' and failovers look/act consistently
* Fixup some unit tests
Fixes a bug whereby servers present in multiple network areas would be
properly segmented in the Router, but not in the gRPC mirror. This would
cause servers in the current datacenter, when leaving a network area
(possibly during the network area's removal), to delete their own
records that still exist in the standard WAN area.
The gRPC client stack uses the gRPC server tracker to execute all RPCs,
even those targeting members of the current datacenter (which is unlike
the net/rpc stack, which has a bypass mechanism).
This would manifest as a gRPC method call never opening a socket because
it would block forever waiting for the current datacenter's pool of
servers to be non-empty.
types: add TLS constants
types: distinguish between human and Envoy serialization for TLSVersion constants
types: add DeprecatedAgentTLSVersions for backwards compatibility
types: add methods for printing TLSVersion as strings
types: add TLSVersionInvalid error value
types: add a basic test for TLSVersion comparison
types: add TLS cipher suite mapping using IANA constant names and values
types: adding ConsulAutoConfigTLSVersionStrings
changelog: add entry for TLSVersion and TLSCipherSuite types
types: initialize TLSVersion constants starting at zero
types: remove TLSVersionInvalid < 0 test
types: update note for ConsulAutoConfigTLSVersionStrings
types: programmatically invert TLSCipherSuites for HumanTLSCipherSuiteStrings lookup map
Co-authored-by: Dan Upton <daniel@floppy.co>
types: add test for TLSVersion zero-value
types: remove unused EnvoyTLSVersionStrings
types: implement MarshalJSON for TLSVersion (see the sketch after this section)
types: implement TLSVersionUnspecified as zero value
types: delegate TLS.MarshalJSON to json.Marshal, use ConsulConfigTLSVersionStrings as default String() values
Co-authored-by: Dan Upton <daniel@floppy.co>
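A condensed sketch of the shape these commits describe; the constant set is trimmed and the names/values are illustrative:

```go
type TLSVersion int

const (
	TLSVersionUnspecified TLSVersion = iota // zero value
	TLSVersionAuto
	TLSv1_2
	TLSv1_3
)

var humanTLSVersionStrings = map[TLSVersion]string{
	TLSVersionAuto: "TLS_AUTO",
	TLSv1_2:        "TLSv1_2",
	TLSv1_3:        "TLSv1_3",
}

func (v TLSVersion) String() string { return humanTLSVersionStrings[v] }

// MarshalJSON delegates to json.Marshal of the human-readable string.
func (v TLSVersion) MarshalJSON() ([]byte, error) {
	return json.Marshal(v.String())
}
```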
The test added in this commit shows the problem. Previously the
SigningKeyID was set to the RootCert, not the local leaf signing cert.
This same bug was fixed in two other places back in 2019, but this last one was
missed.
While fixing this bug I noticed I had the same few lines of code in 3
places, so I extracted a new function for them.
There would be 4 places, but currently the InitializeCA flow sets this
SigningKeyID in a different way, so I've left that alone for now.
We were not adding the local signing cert to the CARoot. This commit
fixes that bug, and also adds support for fixing existing CARoot on
upgrade.
Also update the tests for both primary and secondary to be more strict.
Check the SigningKeyID is correct after initialization and rotation.
This commit applies all our new ways of doing things to Lock Sessions and their interactions with KV and Nodes. This is mostly around our new under-the-hood things, but I also took the opportunity to upgrade some of the CSS to reuse some of our CSS utils that have been made over the past few months (%csv-list and %horizontal-kv-list).
Also added (and worked on existing) documentation for Lock Session related components.
- Moves where they appear up to the <App /> component.
- Instead of a <Notification /> wrapping component to move whatever you use for a notification up to where it needs to appear (via ember-cli-flash), we now use a {{notification}} modifier, now that we have modifiers.
- Global notifications/flashes are no longer special styles of their own. You just use the {{notification}} modifier to hoist whatever component/element you want up to the top of the page. This means we can re-use our existing <Notice /> component for all our global UI notifications (this is the user-visible change here).
* Upgrade AuthForm and document current state a little better
* Hoist SSO out of the AuthForm
* Bare minimum admin partitioned SSO
also:
ui: Tabbed Login with Token or SSO interface (#11619)
- I upgraded our super old component (almost the first Ember component I wrote) to Glimmer and almost template-only. This should use slots/contextual components somehow, but that's a bigger upgrade so I didn't go that far.
- I've been wanting to upgrade the shape of our StateChart component for a very long while now; here it's very apparent that it would be much better to do this sooner rather than later. I left it as is for now, but there will be a PR coming soon with a slight reshaping of this component.
- Added a did-upsert modifier which is a mix of did-insert/did-update
- Documentation added/amended for all the new things.
* Support vault auth methods for the Vault connect CA provider
* Rotate the token (re-authenticate to vault using auth method) when the token can no longer be renewed
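A sketch of the rotation step using the real `github.com/hashicorp/vault/api` client; the function shape, mount path, and parameters here stand in for the provider's auth method config:

```go
import (
	"fmt"

	vaultapi "github.com/hashicorp/vault/api"
)

func loginWithAuthMethod(client *vaultapi.Client, mountPath string, params map[string]interface{}) error {
	secret, err := client.Logical().Write("auth/"+mountPath+"/login", params)
	if err != nil {
		return err
	}
	if secret == nil || secret.Auth == nil {
		return fmt.Errorf("login response contained no auth token")
	}
	// Swap in the newly issued token for subsequent requests.
	client.SetToken(secret.Auth.ClientToken)
	return nil
}
```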
For our dc, nspace and partition 'bucket' menus, sometimes when selecting one 'bucket' we need to reset a different 'bucket' back to the one that your token has by default (or the overall default if the token doesn't specify one). For example, when switching to a different partition whilst you are in a non-default namespace of another partition, we need to switch you to the token's default namespace of the partition you are switching to.
Fixes an issue where the code editor would not resize to the full extent of the browser window, plus CodeEditor restyling/refactoring:
- :label named block
- :tools named block
- :content named block
- code and CSS cleanup
- CodeEditor.mdx
Signed-off-by: Alessandro De Blasis <alex@deblasis.net>
Co-authored-by: John Cowen <johncowen@users.noreply.github.com>
Most HTTP API calls will use the default namespace of the calling token to additionally filter/select the data used for the response if one is not specified by the frontend.
The internal permissions/authorize endpoint does not do this (you can ask for permissions from different namespaces in one request).
Therefore this PR adds the token's default namespace in the frontend only to our calls to the authorize endpoint. I tried to do it in a place that makes it feel like it's getting added in the backend, i.e. in a place which is least likely to ever require changing or thinking about.
Note: We are probably going to change this internal endpoint to also inspect the token's default namespace on the backend. At which point we can revert this commit/PR.
* Add the same support for the token's default partition
* command/redirect_traffic: add rules to redirect DNS to Consul. Currently uses a hack to get the Consul DNS service IP, and this hack only works when the service is deployed in the same namespace as Consul.
* command/redirect_traffic: redirect DNS to Consul when -consul-dns-ip is passed in
* Add unit tests to Consul DNS IP table redirect rules
Co-authored-by: Ashwin Venkatesh <ashwin@hashicorp.com>
Co-authored-by: Iryna Shustava <ishustava@users.noreply.github.com>
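A sketch of the kind of rules added, expressed as argv slices; the nat/OUTPUT choices mirror standard DNS redirection and are illustrative:

```go
func dnsRedirectRules(consulDNSIP string) [][]string {
	return [][]string{
		{"iptables", "-t", "nat", "-A", "OUTPUT", "-p", "udp", "--dport", "53",
			"-j", "DNAT", "--to-destination", consulDNSIP},
		{"iptables", "-t", "nat", "-A", "OUTPUT", "-p", "tcp", "--dport", "53",
			"-j", "DNAT", "--to-destination", consulDNSIP},
	}
}
```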
Temporarily revert to pre-1.10 UI functionality by overwriting frontend
permissions. These are used to hide certain UI elements, but they are
still enforced on the backend.
This temporary measure should be removed again once https://github.com/hashicorp/consul/issues/11098
has been resolved
The duo of `makeUpstreamFilterChainForDiscoveryChain` and `makeListenerForDiscoveryChain` were really hard to reason about, and led to concealing a bug in their branching logic. There were several issues here:
- They tried to accomplish too much: determining filter name, cluster name, and whether RDS should be used.
- They embedded logic to handle significantly different kinds of upstream listeners (passthrough, prepared query, typical services, and catch-all)
- They needed to coalesce different data sources (Upstream and CompiledDiscoveryChain)
Rather than handling all of those tasks inside of these functions, this PR pulls out the RDS/clusterName/filterName logic.
This refactor also fixed a bug with the handling of [UpstreamDefaults](https://www.consul.io/docs/connect/config-entries/service-defaults#defaults). These defaults get stored as UpstreamConfig in the proxy snapshot with a DestinationName of "*", since they apply to all upstreams. However, this wildcard destination name must not be used when creating the name of the associated upstream cluster. The coalescing logic in the original functions here was in some situations creating clusters with a `*.` prefix, which is not a valid destination.
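A sketch of the resulting guard; names are illustrative:

```go
// clusterNameForUpstream never derives a cluster name from the wildcard
// UpstreamConfig entry; it falls back to the discovery chain target instead.
func clusterNameForUpstream(configName, chainTargetName string) string {
	if configName == "*" || configName == "" {
		return chainTargetName // avoids generating a "*."-prefixed cluster
	}
	return configName
}
```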
* ui: Filter global intentions list by namespace and partition
Filters global intention listing by the current partition rather than trying to use a wildcard.
Fixes an issue described in #10132, where if two DCs are WAN federated
over mesh gateways, and the gateway in the non-primary DC is terminated
and receives a new IP address (as is commonly the case when running them
on ephemeral compute instances) the primary DC is unable to re-establish
its connection until the agent running on its own gateway is restarted.
This was happening because we always preferred gateways discovered by
the `Internal.ServiceDump` RPC (which would fail because there's no way
to dial the remote DC) over those discovered in the federation state,
which is replicated as long as the primary DC's gateway is reachable.
* Support Vault Namespaces explicitly in CA config
If there is a Namespace entry included in the Vault CA configuration,
set it as the Vault Namespace on the Vault client
Currently the only way to support Vault namespaces in the Consul CA
config is by doing one of the following:
1) Set the VAULT_NAMESPACE environment variable which will be picked up
by the Vault API client
2) Prefix all Vault paths with the namespace
Neither of these are super pleasant. The first requires direct access
to, and modification of, the Consul runtime environment. It's possible and
expected, just not super pleasant.
The second requires more in-depth knowledge of Vault and how it uses
Namespaces, and could be confusing for anyone without that context. It
also implies that it is not supported.
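The client-side change is small; `SetNamespace` is a real method on the Vault API client, while the config struct and field name here are illustrative:

```go
func newVaultClient(conf ProviderConfig) (*vaultapi.Client, error) {
	client, err := vaultapi.NewClient(vaultapi.DefaultConfig())
	if err != nil {
		return nil, err
	}
	if conf.Namespace != "" {
		client.SetNamespace(conf.Namespace)
	}
	return client, nil
}
```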
* Add changelog
* Remove fmt.Fprint calls
* Make comment clearer
* Add next consul version to website docs
* Add new test for default configuration
* go mod tidy
* Add skip if vault not present
* Tweak changelog text
* Remove some usage of md5 from the system
OSS side of https://github.com/hashicorp/consul-enterprise/pull/1253
This is a potential security issue because an attacker could conceivably manipulate inputs to cause persistence files to collide, effectively deleting the persistence file for one of the colliding elements.
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
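A sketch of the replacement pattern, assuming a helper along these lines:

```go
import (
	"crypto/sha256"
	"encoding/hex"
)

// persistenceFileName hashes the ID with a collision-resistant function so
// attacker-chosen inputs cannot be made to collide the way md5 outputs could.
func persistenceFileName(serviceID string) string {
	sum := sha256.Sum256([]byte(serviceID))
	return hex.EncodeToString(sum[:])
}
```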
Port of: Ensure we check intention service prefix permissions for per service (#11270)
Previously, when showing some action buttons for 'per service intentions' we used a global 'can I do something with any intention' permission to decide whether to show a certain button or not. If a user has a token that does not have 'global' intention permissions, but does have intention permissions on one or more specific services (for example via service / service_prefix), this meant that we did not show them certain buttons required to create/edit the intentions for this specific service.
This PR adds that extra permissions check so we now check the intentions permissions per service instead of using the 'global' "can I edit intentions" question/request.
**Notes:**
- If an HTML button is `disabled`, tippy.js doesn't adopt the
popover properly and subsequently hide it from the user, so as well as
disabling the button so you can't activate the popover, we also don't
even put the popover on the page
- If `ability.item` or `ability.item.Resources` are empty then assume no access
**We don't try to disable service > right hand side intention actions here**
Whether you can create intentions for a service depends on the
_destination_ of the intention you would like to create. For the
topology view going from the LHS to the center, this is straightforward
as we only need to know the permissions for the central service, as when
you are going from the LHS to the center, the center is the
_destination_.
When going from the center to the RHS the _destination[s]_ are on the
RHS. This means we need to know the permissions for potentially 1000s of
services all in one go in order to know when to show a button or not.
We can't realistically discover the permissions for service > RHS
services, as we'd have to either make an HTTP request per right-hand
service, or potentially make an incredibly large POST request for all the
potentially 1000s of services on the right hand side (preferable to
1000s of HTTP requests, but still not great).
Therefore for the moment at least we keep the old functionality (thin client)
for the middle to RHS here. If you do go to click on the button and you
don't have permissions to update the intention you will still not be
able to update it, only you won't know this until you click the button
(at which point you'll get a UI visible 403 error)
Note: We reversed the conditional here between 1.10 and 1.11,
so it makes 100% sense that the port differs from 1.11 here.
* add root_cert_ttl option for consul connect, vault ca providers
Signed-off-by: FFMMM <FFMMM@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
* add changelog, pr feedback
Signed-off-by: FFMMM <FFMMM@users.noreply.github.com>
* Update .changelog/11428.txt, more docs
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* Update website/content/docs/agent/options.mdx
Co-authored-by: Kyle Havlovitz <kylehav@gmail.com>
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
Co-authored-by: Kyle Havlovitz <kylehav@gmail.com>
* ui: Ensure dc selector correctly shows the currently selected dc
* ui: Restrict access to non-default partitions in non-primaries (#11420)
This PR restricts access via the UI to only the default partition when in a non-primary datacenter i.e. you can only have multiple (non-default) partitions in the primary datacenter.