This should very slightly reduce the amount of memory required to store each item in
the cache.
It will also enable setting different TTLs based on the type of result. For example,
we may want to use a shorter TTL when the result indicates the resource does not exist,
as storing many of these records could easily lead to a DoS caused by
OOM.
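As a rough sketch of the idea (names and durations here are made up, not the actual implementation):
```
import "time"

// Illustrative sketch only: choose a shorter TTL for negative results so
// cached "does not exist" entries cannot pile up and exhaust memory.
func cacheTTL(found bool) time.Duration {
	if !found {
		return 30 * time.Second // short TTL for "not found" results
	}
	return 3 * 24 * time.Hour // longer TTL for normal results
}
```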
These two notify functions are very similar. There appear to be just
enough differences that trying to parameterize the differences may not
improve things.
For now, reduce some of the cosmetic differences so that the material
differences are more obvious.
Use named returns so that the caller has a better idea of what these
bools mean.
Return early to reduce the scope and make it more obvious which values
are returned in which cases. This also reduces the number of conditional
expressions in each case.
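A hypothetical sketch of the pattern; the conditions are invented, but it shows how named returns plus early returns read:
```
// Named returns plus early returns make it obvious which case yields
// which values; the conditions themselves are illustrative only.
func shouldNotify(idx, lastIdx uint64, updated bool) (notify bool, resetTimer bool) {
	if !updated {
		return false, false
	}
	if idx <= lastIdx {
		return false, true
	}
	return true, true
}
```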
The test had two racy bugs related to memdb references.
The first was when we initially populated data and retained the FederationState objects in a slice. Due to how the `inmemCodec` works these were actually the identical objects passed into memdb.
The second was that the `checkSame` assertion function was reading from memdb and setting the RaftIndexes to zeros to aid in equality checks. This was mutating the contents of memdb which is a no-no.
With this fix, the command:
```
i=0; while /usr/local/bin/go test -count=1 -timeout 30s github.com/hashicorp/consul/agent/consul -run '^(TestReplication_FederationStates)$'; do i=$((i + 1)); printf "$i "; done
```
That command used to break on my machine in fewer than 20 runs; it now runs 150+ times without any issue.
Might also fix #7575
On recent macOS versions, the ulimit defaults to 256, while many other
systems (e.g., some Linux distributions) limit this value to 1024.
On validation of configuration, Consul now validates that the number of
allowed file descriptors is bigger than http_max_conns_per_client.
This makes some unit tests fail on macOS.
Use a smaller value in the unit tests, so tests run well by default
on macOS without any need to tune the OS.
* Enable filtering language support for the v1/connect/intentions listing API
* Update website for filtering of Intentions
* Update website/source/api/connect/intentions.html.md
This change moves all the typeEntry lookups to the first step in the exported methods,
and makes the unexported internals accept the typeEntry struct.
This change is primarily intended to make it easier to extract the container of caches
from the Cache type.
It may incidentally reduce locking in fetch, but that was not a goal.
Sometimes, in CI, the process could receive a SIGURG, producing this line:
FAIL: TestForwardSignals/signal-interrupt (0.06s)
util_test.go:286: expected to read line "signal: interrupt" but got "signal: urgent I/O condition"
Only forward the signals we test, to avoid this kind of false positive.
An example of such an unstable error in CI:
https://circleci.com/gh/hashicorp/consul/153571
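A minimal sketch of the change, assuming a helper like the one below stands in for the test's forwarding setup (the Go runtime itself uses SIGURG, so subscribing to every signal will eventually pick one up):
```
import (
	"os"
	"os/signal"
	"syscall"
)

// Subscribe only to the signals the test actually sends instead of every
// signal, so an unrelated SIGURG can never be forwarded.
func notifyForwardedSignals() chan os.Signal {
	ch := make(chan os.Signal, 2)
	// Previously the equivalent of signal.Notify(ch) forwarded all signals.
	signal.Notify(ch, os.Interrupt, syscall.SIGTERM)
	return ch
}
```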
Exposing checks is supposed to allow a Consul agent bound to a different
IP address (e.g., in a different Kubernetes pod) to access healthchecks
through the proxy while the underlying service binds to localhost. This
is an important security feature that makes sure no external traffic
reaches the service except through the proxy.
However, as far as I can tell, this is subtly broken in the case where
the Consul agent cannot reach the proxy over localhost.
If a proxy is configured with: `{ LocalServiceAddress: "127.0.0.1",
Checks: true }`, as is typical with a sidecar proxy, the Consul checks
are currently rewritten to `127.0.0.1:<random port>`. A Consul agent
that does not share the loopback address cannot reach this address. Just
to make sure I was not misunderstanding, I tried configuring the proxy
with `{ LocalServiceAddress: "<pod ip>", Checks: true }`. In this case,
while the checks are rewritten as expected and the agent can reach the
dynamic port, the proxy can no longer reach its backend because the
traffic is no longer on the loopback interface.
I think rewriting the checks to use `proxy.Address`, the proxy's own
address, is more correct in this case. That is the IP where the proxy
can be reached, both by other proxies and by a Consul agent running on
a different IP. The local service address should continue to use
`127.0.0.1` in most cases.
If a proxied service is a gRPC or HTTP2 service, but a path is exposed
using the HTTP1 or TCP protocol, Envoy should not be configured with
`http2ProtocolOptions` for the cluster backing the path.
A situation where this comes up is a gRPC service whose healthcheck or
metrics route (e.g. for Prometheus) is an HTTP1 service running on
a different port. Previously, if these were exposed either using
`Expose: { Checks: true }` or `Expose: { Paths: ... }`, Envoy would
still be configured to communicate with the path over HTTP2, which would
not work properly.
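A sketch of the decision this implies; the function and protocol names are illustrative, not the actual xDS code:
```
// Only enable HTTP/2 on the cluster backing an exposed path when the path
// itself speaks http2 or grpc, regardless of the service's own protocol.
func useHTTP2ForExposedPath(pathProtocol string) bool {
	switch pathProtocol {
	case "http2", "grpc":
		return true
	default: // "http", "tcp", or unset
		return false
	}
}
```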
I spent some time today on my local Mac to figure out why Consul 1.6.3+
was not accepting limits.http_max_conns_per_client.
This adds an explicit check on the number of file descriptors to be sure
it might work (this is no guarantee, as if many clients are reaching
the agent, it might consume even more file descriptors).
Anyway, many users are fighting with RLIMIT_NOFILE, and having a clear
message would allow them to figure out what to fix.
Example of message (reload or start):
```
2020-03-11T16:38:37.062+0100 [ERROR] agent: Error starting agent: error="system allows a max of 512 file descriptors, but limits.http_max_conns_per_client: 8192 needs at least 8212"
```
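A rough sketch of such a validation; the overhead constant and function name are illustrative, not Consul's actual values:
```
import (
	"fmt"
	"syscall"
)

// Compare RLIMIT_NOFILE against http_max_conns_per_client plus some
// headroom for the agent's own descriptors.
func validateFDLimit(maxConnsPerClient int) error {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return err
	}
	const overhead = 20 // illustrative headroom for listeners, logs, etc.
	required := uint64(maxConnsPerClient + overhead)
	if rl.Cur < required {
		return fmt.Errorf("system allows a max of %d file descriptors, but "+
			"limits.http_max_conns_per_client: %d needs at least %d",
			rl.Cur, maxConnsPerClient, required)
	}
	return nil
}
```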
Due to the merge of #7562, upstream no longer compiles.
The error is:
ERRO Running error: gofmt: analysis skipped: errors in package: [/Users/p.souchay/go/src/github.com/hashicorp/consul/agent/config_endpoint_test.go:188:33: too many arguments]
This function now only starts the agent.
Using:
git grep -l 'StartTestAgent(t, true,' | \
xargs sed -i -e 's/StartTestAgent(t, true,/StartTestAgent(t,/g'
When running with `-dev` (DevMode), it is not possible to replace
the embedded UI with another one because `-dev` implies `-ui`.
This commit allows this and slightly changes the error message
about Consul 0.7.0, which is very old and does not apply to
the current version anyway.
This config entry will be used to configure terminating gateways.
It accepts the name of the gateway and a list of services the gateway will represent.
For each service, users will be able to specify its name, namespace, and additional options for TLS origination.
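As a sketch of what such an entry might look like through the api package; the field names for the TLS origination options are assumptions based on the description above:
```
import "github.com/hashicorp/consul/api"

// Hypothetical terminating-gateway entry linking one external service with
// TLS origination material; treat the field names as assumptions.
var entry = &api.TerminatingGatewayConfigEntry{
	Kind: api.TerminatingGateway,
	Name: "us-west-gateway",
	Services: []api.LinkedService{
		{
			Name:     "billing",
			CAFile:   "/etc/certs/ca.pem",
			CertFile: "/etc/certs/client.pem",
			KeyFile:  "/etc/certs/client-key.pem",
		},
	},
}
```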
Co-authored-by: Kyle Havlovitz <kylehav@gmail.com>
Co-authored-by: Chris Piraino <cpiraino@hashicorp.com>
* Add Ingress gateway config entry and other relevant structs
* Add api package tests for ingress gateways
* Embed EnterpriseMeta into ingress service struct
* Add namespace fields to api module and test consul config write decoding
* Don't require a port for ingress gateways
* Add snakeJSON and camelJSON cases in command test
* Run Normalize on service's ent metadata
Sadly cannot think of a way to test this in OSS.
* Every protocol requires at least 1 service
* Validate ingress protocols
* Update agent/structs/config_entry_gateways.go
Co-authored-by: Chris Piraino <cpiraino@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
Previously the log output included the test name twice and a long date
format. The test output is already grouped by test, so adding the test
name did not add any new information. The date and time are only useful
to understand elapsed time, so using a short format should provide
sufficient detail.
Also fixed a bug in NewTestAgentWithFields where nil was returned
instead of the test agent.
This test would occasionally fail because we checked for a status of
"critical" initially. This races with the actual healthcheck being run
and declared passing.
We instead use a ttl health check so that we don't rely on timing at all.
To reduce the chance of some tests not being run because their names do
not match the regex passed to '-run'.
Also document why some tests are allowed to be skipped on CI.
If the CI environment is not correct for running tests the tests
should fail, so that we don't accidentally stop running some tests
because of a change to our CI environment.
Also removed a duplicate declaration from init. I believe one was
overriding the other, as they are both in the same package.
These changes are necessary to ensure advertisement happens correctly even when datacenters are connected via network areas in Consul enterprise.
This also changes how we check if ACLs can be upgraded within the local datacenter. Previously we would iterate through all LAN members. Now we just use the ServerLookup type to iterate through all known servers in the DC.
This is like a Möbius strip of code due to the fact that low-level components (serf/memberlist) are connected to high-level components (the catalog and mesh-gateways) in a twisty maze of references which make it hard to dive into. With that in mind here's a high level summary of what you'll find in the patch:
There are several distinct chunks of code that are affected:
* new flags and config options for the server
* retry join WAN is slightly different
* retry join code is shared to discover primary mesh gateways from secondary datacenters
* because retry join logic runs in the *agent* and the results of that
operation for primary mesh gateways are needed in the *server* there are
some methods like `RefreshPrimaryGatewayFallbackAddresses` that must occur
at multiple layers of abstraction just to pass the data down to the right
layer.
* new cache type `FederationStateListMeshGatewaysName` for use in `proxycfg/xds` layers
* the function signature for RPC dialing picked up a new required field (the
node name of the destination)
* several new RPCs for manipulating a FederationState object:
`FederationState:{Apply,Get,List,ListMeshGateways}`
* 3 read-only internal APIs for debugging use to invoke those RPCs from curl
* raft and fsm changes to persist these FederationStates
* replication for FederationStates as they are canonically stored in the
Primary and replicated to the Secondaries.
* a special derivative of anti-entropy that runs in secondaries to snapshot
their local mesh gateway `CheckServiceNodes` and sync them into their upstream
FederationState in the primary (this works in conjunction with the
replication to distribute addresses for all mesh gateways in all DCs to all
other DCs)
* a "gateway locator" convenience object to make use of this data to choose
the addresses of gateways to use for any given RPC or gossip operation to a
remote DC. This gets data from the "retry join" logic in the agent and also
directly calls into the FSM.
* RPC (`:8300`) on the server sniffs the first byte of a new connection to
determine if it's actually doing native TLS (see the sketch after this list).
If so it checks the ALPN header for protocol determination (just like how the
existing system uses the type-byte marker).
* 2 new kinds of protocols are exclusively decoded via this native TLS
mechanism: one for ferrying "packet" operations (udp-like) from the gossip
layer and one for "stream" operations (tcp-like). The packet operations
re-use sockets (using length-prefixing) to cut down on TLS re-negotiation
overhead.
* the server instances specially wrap the `memberlist.NetTransport` when running
with gateway federation enabled (in a `wanfed.Transport`). The general gist is
that if it tries to dial a node in the SAME datacenter (deduced by looking
at the suffix of the node name) there is no change. If dialing a DIFFERENT
datacenter it is wrapped up in a TLS+ALPN blob and sent through some mesh
gateways to eventually end up in a server's :8300 port.
* a new flag when launching a mesh gateway via `consul connect envoy` to
indicate that the servers are to be exposed. This sets a special service
meta when registering the gateway into the catalog.
* `proxycfg/xds` notice this metadata blob to activate additional watches for
the FederationState objects as well as the location of all of the consul
servers in that datacenter.
* `xds:` if the extra metadata is in place additional clusters are defined in a
DC to bulk sink all traffic to another DC's gateways. For the current
datacenter we listen on a wildcard name (`server.<dc>.consul`) that load
balances all servers as well as one mini-cluster per node
(`<node>.server.<dc>.consul`)
* the `consul tls cert create` command got a new flag (`-node`) to help create
an additional SAN in certs that can be used with this flavor of federation.
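As a sketch of the first-byte sniffing mentioned in the RPC bullet above, assuming a simple peek-based wrapper (0x16 is the TLS handshake record type):
```
import (
	"bufio"
	"net"
)

// peekedConn re-delivers the byte we peeked at when the caller reads.
type peekedConn struct {
	net.Conn
	r *bufio.Reader
}

func (c *peekedConn) Read(p []byte) (int, error) { return c.r.Read(p) }

// sniffNativeTLS reports whether the connection looks like a native TLS
// handshake rather than the existing type-byte multiplexing.
func sniffNativeTLS(conn net.Conn) (net.Conn, bool, error) {
	br := bufio.NewReader(conn)
	b, err := br.Peek(1)
	if err != nil {
		return nil, false, err
	}
	return &peekedConn{Conn: conn, r: br}, b[0] == 0x16, nil
}
```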
This fixes issue #7318.
Between versions 1.5.2 and 1.5.3, a regression was introduced regarding the health
of services. Patch #6144 had been issued for health checks of nodes, but not for
health checks of services.
What happened on a reload was:
1. save all health check statuses
2. clean up everything
3. add new services with health checks
At step 3, the local state of health checks was supposed to be taken into account,
but since we cleaned up at step 2, that state was lost.
This PR introduces the snap parameter, so step 3 can use the information saved at step 1.
This avoids having to add format=prometheus to the request and makes it
easier to parse metrics using Prometheus.
This commit follows https://github.com/hashicorp/consul/pull/6514, as
that PR has been closed, and extends it by accepting the old Prometheus
mime-type.
* xDS Mesh Gateway Resolver Subset Fixes
The first fix was that clusters were being generated for every service resolver subset regardless of whether there were any service instances of the associated service in that dc. The previous logic didn't care at all, but now it will omit generating those clusters unless we also have service instances that should be proxied.
The second fix was to respect the DefaultSubset of a service resolver so that mesh-gateways would configure the endpoints of the unnamed subset cluster to only those endpoints matched by the default subsets filters.
* Refactor the gateway endpoint generation to be a little easier to read
Fixes #7231. Before, an agent would always emit a warning when there is
an encrypt key in the configuration and an existing keyring stored,
which happens on restart.
Now it only emits that warning when the encrypt key from the
configuration is not part of the keyring.
* agent: measure blocking queries
* agent.rpc: update docs to mention we only record blocking queries
* agent.rpc: make go fmt happy
* agent.rpc: fix non-atomic read and decrement with bitwise xor of uint64 0
* agent.rpc: clarify review question
* agent.rpc: today I learned that one must declare all variables before interacting with goto labels
* Update agent/consul/server.go
agent.rpc: more precise comment on `Server.queriesBlocking`
Co-Authored-By: Paul Banks <banks@banksco.de>
* Update website/source/docs/agent/telemetry.html.md
agent.rpc: improve queries_blocking description
Co-Authored-By: Paul Banks <banks@banksco.de>
* agent.rpc: fix some bugs found in review
* add a note about the updated counter behavior to telemetry.md
* docs: add upgrade-specific note on consul.rpc.quer{y,ies_blocking} behavior
Co-authored-by: Paul Banks <banks@banksco.de>
We set RawToString=true so that []uint8 => string when decoding an interface{}.
We set the MapType so that map[interface{}]interface{} decodes to map[string]interface{}.
Add tests to ensure that this doesn't break existing usages.
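A sketch of the handle settings being described, assuming the hashicorp/go-msgpack codec package:
```
import (
	"reflect"

	"github.com/hashicorp/go-msgpack/codec"
)

// RawToString makes []uint8 decode to string inside interface{} values,
// and MapType makes untyped maps decode to map[string]interface{}.
func newMsgpackHandle() *codec.MsgpackHandle {
	h := &codec.MsgpackHandle{RawToString: true}
	h.MapType = reflect.TypeOf(map[string]interface{}(nil))
	return h
}
```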
Fixes #7223
* fix LeaderTest_ChangeNodeID to use StatusLeft and add waitForAnyLANLeave
* unextract the waitFor... fn, simplify, and provide a more descriptive error
This fixes #7020.
There are two problems this PR solves:
* if the node info changes, it is highly likely that we will get service and check registration permission errors unless those service tokens have node:write. Hopefully the services you register don’t have this permission.
* the timer for a full sync gets reset for every partial sync which means that many partial syncs are preventing a full sync from happening
Instead of syncing node info last, after services and checks, and possibly saving one RPC because it is included in every service sync, I am syncing node info first. It is only ever going to be a single RPC that we are only doing when node info has changed. This way we are guaranteed to sync node info even when something goes wrong with services or checks which is more likely because there are more syncs happening for them.
Previously this happened to be validating only the chains in the default namespace. Now it will validate all chains in all namespaces when the global proxy-defaults is changed.
* Cleanup the discovery chain compilation route handling
Nothing functionally should be different here. The real difference is that when creating new targets or handling route destinations we use the router config entries name and namespace instead of that of the top level request. Today they SHOULD always be the same but that may not always be the case. This hopefully also makes it easier to understand how the router entries are handled.
* Refactor a small bit of the service manager tests in oss
We used to use the stringHash function to compute part of the filename where things would get persisted to. This has been changed in the core code to calling the StringHash method on the ServiceID type. It just so happens that the new method will output the same value for anything in the default namespace (by design actually). However, logically this filename computation in the test should do the same thing as the core code itself so I updated it here.
Also of note is that newer enterprise-only tests for the service manager cannot use the old stringHash function at all because it will produce incorrect results for non-default namespaces.
The previous value was too conservative and users with many instances
were having problems because of it. This change increases the limit to
8192 which reportedly fixed most of the issues with that.
Related: #4984, #4986, #5050.
This adds acl enforcement to the two endpoints that were missing it.
Note that in the case of getting a service's health by its ID, we still
must first look up the service, so we still "leak" information about a
service with that ID existing. There isn't really a way around it though,
as ACLs are meant to check service names.
* Updates to the Txn API for namespaces
* Update agent/consul/txn_endpoint.go
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
Co-authored-by: R.B. Boyer <public@richardboyer.net>
The backing RPC already existed but the endpoint will be useful for other service syncing processes such as consul-k8s as this endpoint can return all services registered with a node regardless of namespacing.
* Fix segfault when removing both a service and associated check
updateSyncState creates entries in the services and checks maps for
remote services/checks that are not found locally, so that we can then
make sure to delete them in our reconciliation process. However, the
values added to the map are missing key fields that the rest of the code
expects to not be nil.
* Add comment stating Check field can be nil
Something similar already happens inside of the server
(agent/consul/server.go) but by doing it in the general config parsing
for the agent we can have agent-level code rely on the PrimaryDatacenter
field, too.
* Increase raft notify buffer.
Fixes https://github.com/hashicorp/consul/issues/6852.
Increasing the buffer helps recovery from leader flapping. It lowers
the chances of the flapping leader getting into a deadlock situation like
the one described in #6852.
- Explicitly wait to start the test until the initial AE sync of the node.
- Run the blocking query in the main goroutine to cut down on possible
poor goroutine scheduling issues being to blame for delays.
- If the blocking query is woken up with no index change, rerun the
query. This may happen if the CI server is loaded and time dilation is
happening.
Currently, when using the built-in CA provider for Connect, root certificates are valid for 10 years; however, secondary DCs get intermediates that are valid for only 1 year. There is currently no mechanism, short of rotating the root in the primary, that will cause the secondary DCs to renew their intermediates.
This PR adds a check that renews the cert if it is halfway through its validity period.
In order to be able to test these changes, a new configuration option was added: IntermediateCertTTL which is set extremely low in the tests.
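A minimal sketch of the half-life check (the function name is illustrative):
```
import (
	"crypto/x509"
	"time"
)

// Renew the intermediate once we are past the midpoint of its validity.
func shouldRenewIntermediate(cert *x509.Certificate, now time.Time) bool {
	lifetime := cert.NotAfter.Sub(cert.NotBefore)
	halfway := cert.NotBefore.Add(lifetime / 2)
	return now.After(halfway)
}
```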
* Add CreateCSRWithSAN
* Use CreateCSRWithSAN in auto_encrypt and cache
* Copy DNSNames and IPAddresses to cert
* Verify auto_encrypt.sign returns cert with SAN
* provide configuration options for auto_encrypt dnssan and ipsan
* rename CreateCSRWithSAN to CreateCSR
* Use consts for well-known tagged address keys
* Add ipv4 and ipv6 tagged addresses for node lan and wan
* Add ipv4 and ipv6 tagged addresses for service lan and wan
* Use IPv4 and IPv6 address in DNS
Deregistering a service from the catalog automatically deregisters its
checks, however the agent still performs a deregister call for each
service checks even after the service has been deregistered.
With ACLs enabled this results in logs like:
"message:consul: "Catalog.Deregister" RPC failed to server
server_ip:8300: rpc error making call: rpc error making call: Unknown
check 'check_id'"
This change removes associated checks from the agent state when
deregistering a service, which results in fewer calls to the servers and
suppresses the error logs.
* Renamed structs.IntentionWildcard to structs.WildcardSpecifier
* Refactor ACL Config
Get rid of remnants of enterprise only renaming.
Add a WildcardName field for specifying what string should be used to indicate a wildcard.
* Add wildcard support in the ACL package
For read operations they can call anyAllowed to determine if any read access to the given resource would be granted.
For write operations they can call allAllowed to ensure that write access is granted to everything.
* Make v1/agent/connect/authorize namespace aware
* Update intention ACL enforcement
This also changes how intention:read is granted. Before, the Intention.List RPC would allow viewing an intention if the token had intention:read on the destination. However, Intention.Match allowed viewing if access was allowed for either the source or dest side. Now Intention.List and Intention.Get fall in line with Intention.Match's previous behavior.
Due to this being done a few different places ACL enforcement for a singular intention is now done with the CanRead and CanWrite methods on the intention itself.
* Refactor Intention.Apply to make things easier to follow.
Sometimes, we have lots of errors in cross-DC calls (several hundred per second).
Enrich the log in order to help diagnose the root cause of the issue.
Before, we were issuing one watch for every service in the services listing, which would have caused the agent to process many more identical events simultaneously.
Restore a few more service-kind index updates so blocking in ServiceDump works in more cases.
Namely, one omission was that check updates for dumped services were not
unblocking.
Also adds a ServiceDump state store test and fixes a watch bug with the
normal dump.
Follow-on from #6916
• Renamed EnterpriseACLConfig to just Config
• Removed chained_authorizer_oss.go as it was empty
• Renamed acl.go to errors.go to more closely describe its contents
* Add updated github.com/miekg/dns to go modules
* Add updated github.com/miekg/dns to vendor
* Fix github.com/miekg/dns api breakage
* Decrease size when trimming UDP packets
Need more room for the header(?); if we don't decrease the size we get an
"overflow unpacking uint32" error from the dns library.
* Fix dns truncate tests with api changes
* Make the Windows build work again. Upgrade x/sys and x/crypto and vendor them.
This upgrade is needed because of API breakage in x/sys introduced
by the minimal x/sys dependency of miekg/dns
This API breakage has been fixed in commit
855e68c859
* relax requirements for auto_encrypt on server
* better error message when auto_encrypt and verify_incoming on
* docs: explain verify_incoming on Consul clients.
* Increase number to test ignore. Consul Enterprise has more flags, and since we are trying to reduce the differences between both code bases, we are increasing the number in OSS. The semantics don't change; it is just a cosmetic thing.
* Introduce agent.initEnterprise for enterprise related hooks.
* Sync test with ent version.
* Fix import order.
* revert error wording.
Ensure we close the Sentinel Evaluator so as not to leak goroutines.
Fix a bunch of test logging so that various warnings when starting a test agent go to the test logger and not straight to stdout.
Various canned ent meta types always return a valid pointer (no more nils). This allows us to blindly deref + assign in various places.
Update ACL index tracking to ensure oss -> ent upgrades will work as expected.
Update ent meta parsing to include function to disallow wildcarding.
Also update the Docs and fixup the HTTP API to return proper errors when someone attempts to use Namespaces with an OSS agent.
Add Namespace HTTP API docs
Make all API endpoints disallow unknown fields
* Implement endpoint to query whether the given token is authorized for a set of operations
* Updates to allow for remote ACL authorization via RPC
This is only used when making an authorization request to a different datacenter.
* Adds 'limits' field to the upstream configuration of a connect proxy
This allows a user to configure the envoy connect proxy with
'max_connections', 'max_queued_requests', and 'max_concurrent_requests'. These
values are defined in the local proxy on a per-service instance basis
and should thus NOT be thought of as a global-level or even service-level value.
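A rough sketch of how these limits might be expressed on an upstream's opaque config map; the nesting and key names are taken from this description and should be treated as assumptions:
```
import "github.com/hashicorp/consul/api"

// Per-upstream limits for the local envoy proxy (illustrative values).
var upstream = api.Upstream{
	DestinationName: "backend",
	Config: map[string]interface{}{
		"limits": map[string]interface{}{
			"max_connections":         128,
			"max_queued_requests":     256,
			"max_concurrent_requests": 64,
		},
	},
}
```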
* Update AWS SDK to use PCA features.
* Add AWS PCA provider
* Add plumbing for config, config validation tests, add test for inheriting existing CA resources created by user
* Unparallel the tests so we don't exhaust PCA limits
* Merge updates
* More aggressive polling; rate limit pass through on sign; Timeout on Sign and CA create
* Add AWS PCA docs
* Fix Vault doc typo too
* Doc typo
* Apply suggestions from code review
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Doc fixes; tests for erroring if State is modified via API
* More review cleanup
* Uncomment tests!
* Minor suggested clean ups
Replaces WaitForLeader with WaitForTestAgent. This waits to make sure
that the node itself is correctly registered in the catalog before
attempting additional registrations.
* Change CA Configure struct to pass Datacenter through
* Remove connect/ca/plugin as we don't have immediate plans to use it.
We still intend to one day, but there are likely to be several changes to the CA provider interface before we do, so it's better to rebuild from history when we do that work properly.
* Rename PrimaryDC; fix endpoint in secondary DCs
Refactor dns to have the same behavior between A and SRV records.
The current implementation returns the node name instead of the service
address.
With this fix, when querying for an SRV record the service address is
returned in the SRV record.
And when performing a simple dns lookup it returns a CNAME to the
service address.
* Support Connect CAs that can't cross sign
* revert spurious mod changes from make tools
* Add log warning when forcing CA rotation
* Fixup SupportsCrossSigning to report errors and work with Plugin interface (fixes tests)
* Fix failing snake_case test
* Remove misleading comment
* Revert "Remove misleading comment"
This reverts commit bc4db9cabed8ad5d0e39b30e1fe79196d248349c.
* Remove misleading comment
* Regen proto files messed up by rebase
* pass logger through to provider
* test for proper operation of NeedsLogger
* remove public testServer function
* Oops, actually set the logger in all the places we need it - CA config set wasn't, causing a segfault
* Fix all the other places in tests where we set the logger
* Allow CA Providers to persist some state
* Update CA provider plugin interface
* Fix plugin stubs to match provider changes
* Update agent/connect/ca/provider.go
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
* Cleanup review comments
* add NeedsLogger to Provider interface
* implements NeedsLogger in default provider
* pass logger through to provider
* test for proper operation of NeedsLogger
* remove public testServer function
* Switch test to actually assert on logging output rather than reflection.
--amend
* Oops, actually set the logger in all the places we need it - CA config set wasn't, causing a segfault
* Fix all the other places in tests where we set the logger
* Add TODO comment
* Allow RSA CA certs for consul and vault providers to correctly sign EC leaf certs.
* Ensure key type and bits are populated from CA cert and clean up tests
* Add integration test and fix error when initializing secondary CA with RSA key.
* Add more tests, fix review feedback
* Update docs with key type config and output
* Apply suggestions from code review
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
Main Changes:
• method signature updates everywhere to account for passing around enterprise meta.
• populate the EnterpriseAuthorizerContext for all ACL related authorizations.
• ACL resource listings now operate like the catalog or kv listings in that the returned entries are filtered down to what the token is allowed to see. With Namespaces it's no longer all or nothing.
• Modified the acl.Policy parsing to abstract away basic decoding so that enterprise can do it slightly differently. Also updated method signatures so that when parsing a policy it can take extra ent metadata to use during rules validation and policy creation.
Secondary Changes:
• Moved protobuf encoding functions out of the agentpb package to eliminate circular dependencies.
• Added custom JSON unmarshalers for a few ACL resource types (to support snake case and to get rid of mapstructure)
• AuthMethod validator cache is now an interface as these will be cached per-namespace for Consul Enterprise.
• Added checks for policy/role link existence at the RPC API so we don’t push the request through raft to have it fail internally.
• Forward ACL token delete request to the primary datacenter when the secondary DC doesn’t have the token.
• Added a bunch of ACL test helpers for inserting ACL resource test data.
Previously the logic for configuring RDS during LDS for L7 upstreams was
over-applied to TCP proxies, resulting in a cluster name of <emptystring>
being used incorrectly.
Fixes #6621
If acls have not yet replicated to the secondary then authz requests
will be remotely resolved by the primary. Now these tests explicitly
wait until replication has caught up first.
* ACL Authorizer overhaul
To account for upcoming features every Authorization function can now take an extra *acl.EnterpriseAuthorizerContext. These are unused in OSS and will always be nil.
Additionally the acl package has received some thorough refactoring to enable all of the extra Consul Enterprise specific authorizations including moving sentinel enforcement into the stubbed structs. The Authorizer funcs now return an acl.EnforcementDecision instead of a boolean. This improves the overall interface as it makes multiple Authorizers easily chainable as they now indicate whether they had an authoritative decision or should use some other defaults. A ChainedAuthorizer was added to handle this Authorizer enforcement chain and will never itself return a non-authoritative decision.
* Include stub for extra enterprise rules in the global management policy
* Allow for an upgrade of the global-management policy
A check may be set to become passing/critical only if a specified number of successive
checks return passing/critical in a row. The status will stay the same as before until
the threshold is reached.
This feature is available for HTTP, TCP, gRPC, Docker & Monitor checks.
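A sketch of how such thresholds could be set when registering a check via the api package; treat the SuccessBeforePassing/FailuresBeforeCritical field names as assumptions based on this description:
```
import "github.com/hashicorp/consul/api"

// The check only flips to passing after 3 consecutive successes and only
// flips to critical after 2 consecutive failures.
var check = &api.AgentCheckRegistration{
	Name: "web-http",
	AgentServiceCheck: api.AgentServiceCheck{
		HTTP:                   "http://127.0.0.1:8080/health",
		Interval:               "10s",
		SuccessBeforePassing:   3,
		FailuresBeforeCritical: 2,
	},
}
```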
* Implement leader routine manager
Switch over the following to use it for go routine management:
• Config entry Replication
• ACL replication - tokens, policies, roles and legacy tokens
• ACL legacy token upgrade
• ACL token reaping
• Intention Replication
• Secondary CA Roots Watching
• CA Root Pruning
Also added the StopAll call into the Server Shutdown method to ensure all leader routines get killed off when shutting down.
This should be mostly unnecessary as `revokeLeadership` should manually stop each one but just in case we really want these to go away (eventually).
This only works so long as we use simplistic protobuf types. Constructs such as oneof or Any types that require type annotations for decoding properly will fail hard but that is by design. If/when we want to use any of that we will probably need to consider a v2 API.
* Add JSON and Binary Marshaler Generators for Protobuf Types
* Generate files with the correct version of gogo/protobuf
I have pinned the version in the makefile so when you run make tools you get the right version. This pulls the version out of go.mod so it should remain up to date.
The version at the time of this commit we are using is v1.2.1
* Fixup some shell output
* Update how we determine the version of gogo
This just greps the go.mod file instead of expecting the go mod cache to already be present
* Fixup vendoring and remove no longer needed json encoder functions
## HTTPAdapter (#5637)
## Ember upgrade 2.18 > 3.12 (#6448)
### Proxies can no longer get away with not calling _super
This means that we can't use create anymore to define dynamic methods.
Therefore we dynamically make 2 extended Proxies on demand, and then
create from those. Therefore we can call _super in the init method of
the extended Proxies.
### We aren't allowed to reset a service anymore
We never actually need to now anyway; this is a remnant of the refactor
from browser-based confirmations. We fix it as simply as possible here
but will revisit and remove the old browser confirm functionality at a
later date.
### Revert classes to use ES5 style to workaround babel transp. probs
Using a mixture of ES6 classes (and hence super) and arrow functions
means that when babel transpiles the arrow functions down to ES5, a
reference to this is moved before the call to super, hence causing a js
error.
Furthermore, the testing environment no longer lets us use
apply/call on the constructor.
These errors only manifest during testing (only in the testing
environment); the application itself runs fine with no problems without
this change.
Using ES5-style class definitions gives us the freedom to do all of the above
without causing any errors, so we reverted these classes back to ES5
class definitions.
### Skip test that seems to have changed due to a change in RSVP timing
This test tests a use case/area of the API that will probably never
be used; it was more about testing out the API. We've skipped the test for now,
as this doesn't affect the application itself, but left a note to come
back here later to investigate further.
### Remove enumerableContentDidChange
Initial testing suggests we don't need to call this function anymore;
the function no longer exists.
### Rework Changeset.isSaving to take into account new ember APIs
Setting/hanging a computedProperty of an instantiated object no longer
works. Move to setting it on the prototype/class definition instead
### Change how we detect whether something requires listening
New ember APIs have changed how you can detect whether something is a
computedProperty or not. It's not immediately clear if it's even possible
now. Therefore we change how we detect whether something should be
listened to by just looking for the presence of `addEventListener`.
### Potentially temporary change of ci test scripts to ensure deps exist
All our tooling scripts run through a Makefile (for people familiar with
only using those), which then calls yarn scripts that can be called
independently (for people familiar with only using yarn).
The Makefile targets always check to make sure all the dependencies are
installed before running anything that requires them (building, testing
etc).
The CI scripts/targets didn't follow this same route and called the yarn
scripts directly (usually CI builds a cache of the dependencies first).
For some reason this cache isn't doing what it usually does, and it
looks as though, in CI, ember isn't installed.
This commit makes the CI scripts consistently use the same method as all
of the other tooling scripts (Makefile target > Install Deps if
required > call yarn script). This should install the dependencies if
for some reason the CI cache building doesn't complete/isn't successful.
Potentially this commit may be reverted if the root of the problem is
elsewhere, although consistency is always good, so it might be a good
idea to leave this commit as is even if we need to debug and fix things
elsewhere.
### Make test-parallel consistent with the rest of the tooling scripts
As we are here making changes for CI purposes (making test-ci
consistent), we spotted that test-parallel is also inconsistent and also
the README manual instructions won't work without `ember` installed
globally.
This commit makes everything consistent and changes the manual
instructions to use the local ember instance that gets installed via
yarn
### Re-wrangle catchable to fit with new ember 3.12 APIs
In the upgrade from ember 3.8 > 3.12 the public interfaces for
ComputedProperties have changed slightly. `meta` is no longer a public
property of ComputedProperty but of a ComputedDecoratorImpl mixin
instead.
7e4ba1096e/packages/%40ember/-internals/metal/lib/computed.ts (L725)
There seems to be no way, by just using publicly available
methods, to replicate this behaviour so that we can create our own
`ComputedProperty` factory via injecting the ComputedProperty class as
we did previously.
3f333bada1/ui-v2/app/utils/computed/factory.js (L1-L18)
Instead we dynamically hang our `Catchable` `catch` method off the
instantiated ComputedProperty. In doing it like this, `ComputedProperty`
already has its `meta` method mixed in, so we don't have to manually
mix it in ourselves (which doesn't seem possible).
This functionality is only used during our work in trying to ensure
our EventSource/BlockingQuery work was as 'ember-like' as possible (i.e.
using the traditional Route.model hooks and ember-like Controller
properties). Our ongoing/upcoming work on a componentized approach to
data a.k.a `<DataSource />` means we will be able to remove the majority
of the code involved here now that it seems to be under an amount of
flux in ember.
### Build bindata_assetfs.go with new UI changes
* Add support for parameterizing the ACL config used with a TestAgent
Using tokens that are UUIDs will get rid of some warnings
* Refactor to allow setting all tokens and change the template to ignore unset values.
This fixes an issue where leaf certificates issued in secondary
datacenters would be reissued very frequently (every ~20 seconds)
because the logic meant to detect root rotation was errantly triggering
because a hash of the ultimate root (in the primary) was being compared
against a hash of the local intermediate root (in the secondary) and
always failing.
Fixes#6521
Ensure that initial failures to fetch an agent cache entry using the
notify API, where the underlying RPC returns a synthetic index of 1,
correctly recover when those RPCs resume working.
The bug in the Cache.notifyBlockingQuery used to incorrectly "fix" the
index for the next query from 0 to 1 for all queries, when it should
have not done so for queries that errored.
Also fixed some things that made debugging difficult:
- config entry read/list endpoints send back QueryMeta headers
- xds event loops don't swallow the cache notification errors
In a previous PR I made it so that we had interfaces that would work enough to allow blockingQueries to work. However to complete this we need all fields to be settable and gettable.
Notes:
• If Go ever gets contracts/generics then we could get rid of all the Getters/Setters
• protoc / protoc-gen-gogo are going to generate all the getters for us.
• I copied all the getters/setters from the protobuf funcs into agent/structs/protobuf_compat.go
• Also added JSON marshaling funcs that use jsonpb for protobuf types.
Fixes: #5396
This PR adds a proxy configuration stanza called expose. These flags register
listeners in Connect sidecar proxies to allow requests to specific HTTP paths from outside of the node. This allows services to protect themselves by only
listening on the loopback interface, while still accepting traffic from non
Connect-enabled services.
Under expose there is a boolean checks flag that would automatically expose all
registered HTTP and gRPC check paths.
This stanza also accepts a paths list to expose individual paths. The primary
use case for this functionality would be to expose paths for third parties like
Prometheus or the kubelet.
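A sketch of what the expose stanza could look like through the api package; the field names are assumptions based on this description:
```
import "github.com/hashicorp/consul/api"

// Expose all registered HTTP/gRPC check paths plus one explicit path for
// Prometheus scraping (ports and paths are illustrative).
var proxyConfig = api.AgentServiceConnectProxyConfig{
	Expose: api.ExposeConfig{
		Checks: true,
		Paths: []api.ExposePath{
			{
				Path:          "/metrics",
				LocalPathPort: 9102,
				ListenerPort:  21500,
				Protocol:      "http",
			},
		},
	},
}
```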
Listeners for requests to exposed paths are configured dynamically at run
time. Any time a proxy or check can be registered, a listener can also be
created.
In this initial implementation requests to these paths are not
authenticated/encrypted.
Also:
* Finished threading replaceExistingChecks setting (from GH-4905)
through service manager.
* Respected the original configSource value that was used to register a
service or a check when restoring persisted data.
* Run several existing tests with and without central config enabled
(not exhaustive yet).
* Switch to ioutil.ReadFile for all types of agent persistence.
The fields in the certs are meant to hold the original binary
representation of this data, not some ascii-encoded version.
The only time we should be colon-hex-encoding fields is for display
purposes or marshaling through non-TLS mediums (like RPC).
This only affects vault versions >=1.1.1 because the prior code
accidentally relied upon a bug that was fixed in
https://github.com/hashicorp/vault/pull/6505
The existing tests should have caught this, but they were using a
vendored copy of vault version 0.10.3. This fixes the tests by running
an actual copy of vault instead of an in-process copy. This has the
added benefit of changing the dependency on vault to just vault/api.
Also update VaultProvider to use similar SetIntermediate validation code
as the ConsulProvider implementation.
* Add build system support for protobuf generation
This is done generically so that we don’t have to keep updating the makefile to add another proto generation.
Note: anything not in the vendor directory and with a .proto extension will be run through protoc if the corresponding namespace.pb.go file is not up to date.
If you want to rebuild just a single proto file you can do so with: make proto-rebuild PROTOFILES=<list of proto files to rebuild>
Providing the PROTOFILES var will override the default behavior of finding all the .proto files.
* Start adding types to the agent/proto package
These will be needed for some other work and are by no means comprehensive.
* Add ability to resolve/fixup the agentpb.ACLLinks structure in the state store.
* Use protobuf marshalling of raft requests instead of msgpack for protoc generated types.
This does not change any encoding of existing types.
* Removed structs package automatically encoding with protobuf marshalling
Instead the caller of raftApply that wants to opt-in to protobuf encoding will have to call `raftApplyProtobuf`
* Run update-vendor to fixup modules.txt
Nothing changed as far as dependencies go, but the ordering of modules in that file depends on the time they are first seen, and it's not alphabetical.
* Rename some things and implement the structs.RPCInfo interface bits
agentpb.QueryOptions and agentpb.WriteRequest implement 3 of the 4 RPCInfo funcs and the new TargetDatacenter message type implements the fourth.
* Use the right encoding function.
* Renamed agent/proto package to agent/agentpb to prevent package name conflicts
* Update modules.txt to fix ordering
* Change blockingQuery to take in interfaces for the query options and meta
* Add %T to error output.
* Add/Update some comments
This should cut down on test flakiness.
Problems handled:
- If you had enough parallel test cases running, the former circular
approach to handling the port block could hand out the same port to
multiple cases before they each had a chance to bind them, leading to
one of the two tests to fail.
- The freeport library would allocate out of the ephemeral port range.
This has been corrected for Linux (which should cover CI).
- The library now waits until a formerly-in-use port is verified to be
free before putting it back into circulation.
In normal operations there is a read/write race related to request
QueryOptions fields. An example race:
WARNING: DATA RACE
Read at 0x00c000836950 by goroutine 30:
github.com/hashicorp/consul/agent/structs.(*ServiceConfigRequest).CacheInfo()
/go/src/github.com/hashicorp/consul/agent/structs/config_entry.go:506 +0x109
github.com/hashicorp/consul/agent/cache.(*Cache).getWithIndex()
/go/src/github.com/hashicorp/consul/agent/cache/cache.go:262 +0x5c
github.com/hashicorp/consul/agent/cache.(*Cache).notifyBlockingQuery()
/go/src/github.com/hashicorp/consul/agent/cache/watch.go:89 +0xd7
Previous write at 0x00c000836950 by goroutine 147:
github.com/hashicorp/consul/agent/cache-types.(*ResolvedServiceConfig).Fetch()
/go/src/github.com/hashicorp/consul/agent/cache-types/resolved_service_config.go:31 +0x219
github.com/hashicorp/consul/agent/cache.(*Cache).fetch.func1()
/go/src/github.com/hashicorp/consul/agent/cache/cache.go:495 +0x112
This patch does a lightweight copy of the request struct so that the
embedded QueryOptions fields that are mutated during Fetch() are scoped
to just that one RPC.
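A minimal sketch of the shallow-copy idea, using stand-in types rather than Consul's real request structs:
```
import "time"

// Stand-in types; the real code copies the actual request struct.
type queryOptions struct {
	MinQueryIndex uint64
	MaxQueryTime  time.Duration
}

type request struct {
	Name string
	queryOptions
}

// fetchCopy mutates only a per-RPC copy, so concurrent readers of the
// original request never race with these writes.
func fetchCopy(req *request, lastIndex uint64) request {
	reqCopy := *req // lightweight shallow copy
	reqCopy.MinQueryIndex = lastIndex
	reqCopy.MaxQueryTime = 10 * time.Minute
	return reqCopy
}
```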
* Store primary's root in secondary after intermediate signature
This ensures that the intermediate exists within the CA root stored in raft and not just in the CA provider state. This has the very nice benefit of actually outputting the intermediate cert within the ca roots HTTP/RPC endpoints.
This change means that if signing the intermediate fails it will not set the root within raft. So far I have not come up with a reason why that is bad. The secondary CA roots watch will pull the root again and go through all the motions. So as soon as getting an intermediate CA works the root will get set.
* Make TestAgentAntiEntropy_Check_DeferSync less flaky
I am not sure this is the full fix but it seems to help for me.
When this test flakes sometimes this happens:
--- FAIL: TestCoordinate_Node (1.69s)
panic: interface conversion: interface {} is nil, not structs.Coordinates [recovered]
FAIL github.com/hashicorp/consul/agent 19.999s
Exit code: 1
panic: interface conversion: interface {} is nil, not structs.Coordinates [recovered]
panic: interface conversion: interface {} is nil, not structs.Coordinates
There is definitely a bug lurking, but the code seems to imply this can
only return nil on 404. The tests previously were not checking the
status code.
The underlying cause of the flake is unknown, but this should turn the
failure into a more normal test failure.
When there is a node name conflict, messages such as the following are displayed within Consul:
`consul.fsm: EnsureRegistration failed: failed inserting node: Error while renaming Node ID: "e1d456bc-f72d-98e5-ebb3-26ae80d785cf": Node name node001 is reserved by node 05f10209-1b9c-b90c-e3e2-059e64556d4a with name node001`
While it is easy to find the node that has reserved the name, it is hard to find
the node trying to acquire the name, since it is not registered and therefore
not part of `consul members` output.
This PR displays the IP of the offender, making such issues far easier to solve.
The embedded `Server` field on a `DNSServer` is only set inside of the
`ListenAndServe` method. If that method fails for reasons like the
address being in use and is not bindable, then the `Server` field will
not be set and the overall `Agent.Start()` will fail.
This will trigger the inner loop of `TestAgent.Start()` to invoke
`ShutdownEndpoints` which will attempt to pretty print the DNS servers
using fields on that inner `Server` field. Because it was never set,
this causes a nil pointer dereference and crashes the test.
Previously `verify_incoming` was required when turning on `auto_encrypt.allow_tls`, but that doesn't work together with HTTPS UI in some scenarios. Adding `verify_incoming_rpc` to the allowed configurations.
AutoEncrypt needs the server-port because it wants to talk via RPC. Information from gossip might not be available at that point, and that's why the server-port is being used.
- Bootstrap escape hatches are OK.
- Public listener/cluster escape hatches are OK.
- Upstream listener/cluster escape hatches are not supported.
If an unsupported escape hatch is configured and the discovery chain is
activated, log a warning and act as if it was not configured.
Fixes #6160
Compiling this will set an optional SNI field on each DiscoveryTarget.
When set, this value should be used for TLS connections to the instances
of the target. If not set, the default should be used.
Setting ExternalSNI will disable mesh gateway use for that target. It also
disables several service-resolver features that do not make sense for an
external service.
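A sketch of a service-defaults entry carrying the new field, via the api package (treat the exact field placement as an assumption):
```
import "github.com/hashicorp/consul/api"

// An external service whose certificates present a fixed SNI; mesh
// gateways are skipped for this target.
var entry = &api.ServiceConfigEntry{
	Kind:        api.ServiceDefaults,
	Name:        "billing",
	Protocol:    "tcp",
	ExternalSNI: "billing.partner.example.com",
}
```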
If the entry is updated for reasons other than protocol it is surprising
that the value is explicitly persisted as 'tcp' rather than leaving it
empty and letting it fall back dynamically on the proxy-defaults value.
Since generated envoy clusters all are named using (mostly) SNI syntax
we can have envoy read the various fields out of that structure and emit
it as stats labels to the various telemetry backends.
I changed the delimiter for the 'customization hash' from ':' to '~'
because ':' is always reencoded by envoy as '_' when generating metrics
keys.
Add parameter local-only to operator keyring list requests to force queries to only hit local servers (no WAN traffic).
HTTP API: GET /operator/keyring?local-only=true
CLI: consul keyring -list --local-only
Sending the local-only flag with any non-GET/list request will result in an error.
Failover is pushed entirely down to the data plane by creating envoy
clusters and putting each successive destination in a different load
assignment priority band. For example this shows that normally requests
go to 1.2.3.4:8080 but when that fails they go to 6.7.8.9:8080:
- name: foo
  load_assignment:
    cluster_name: foo
    policy:
      overprovisioning_factor: 100000
    endpoints:
    - priority: 0
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 1.2.3.4
              port_value: 8080
    - priority: 1
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 6.7.8.9
              port_value: 8080
Mesh gateways route requests based solely on the SNI header tacked onto
the TLS layer. Envoy currently only lets you configure the outbound SNI
header at the cluster layer.
If you try to failover through a mesh gateway you ideally would
configure the SNI value per endpoint, but that's not possible in envoy
today.
This PR introduces a simpler way around the problem for now:
1. We identify any target of failover that will use mesh gateway mode local or
remote and then further isolate any resolver node in the compiled discovery
chain that has a failover destination set to one of those targets.
2. For each of these resolvers we will perform a small measurement of
comparative healths of the endpoints that come back from the health API for the
set of primary target and serial failover targets. We walk the list of targets
in order and if any endpoint is healthy we return that target, otherwise we
move on to the next target.
3. The CDS and EDS endpoints both perform the measurements in (2) for the
affected resolver nodes.
4. For CDS this measurement selects which TLS SNI field to use for the cluster
(note the cluster is always going to be named for the primary target)
5. For EDS this measurement selects which set of endpoints will populate the
cluster. Priority tiered failover is ignored.
One of the big downsides to this approach to failover is that the failover
detection and correction is going to be controlled by consul rather than
deferring that entirely to the data plane as with the prior version. This also
means that we are bound to only failover using official health signals and
cannot make use of data plane signals like outlier detection to affect
failover.
In this specific scenario the lack of data plane signals is ok because the
effectiveness is already muted by the fact that the ultimate destination
endpoints will have their data plane signals scrambled when they pass through
the mesh gateway wrapper anyway so we're not losing much.
Another related fix is that we now use the endpoint health from the
underlying service, not the health of the gateway (regardless of
failover mode).
In addition to exposing compilation over the API, this cleans up the structures that are exchanged so that they are cleaner and easier to support and understand.
Also removed the ability to configure the envoy OverprovisioningFactor.