Fixes: #5396
This PR adds a proxy configuration stanza called expose. These flags register
listeners in Connect sidecar proxies to allow requests to specific HTTP paths from outside of the node. This allows services to protect themselves by only
listening on the loopback interface, while still accepting traffic from non-Connect-enabled services.
Under expose there is a boolean checks flag that automatically exposes all
registered HTTP and gRPC check paths.
This stanza also accepts a paths list to expose individual paths. The primary
use case for this functionality would be to expose paths for third parties like
Prometheus or the kubelet.
Listeners for requests to exposed paths are configured dynamically at run
time. Any time a proxy or check can be registered, a listener can also be
created.
In this initial implementation requests to these paths are not
authenticated/encrypted.
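For illustration, a sidecar registration using the new stanza might look like the following sketch (the ports and path are made up; field names follow the expose configuration described above):
```
{
  "service": {
    "name": "web",
    "port": 8080,
    "connect": {
      "sidecar_service": {
        "proxy": {
          "expose": {
            "checks": true,
            "paths": [
              {
                "path": "/metrics",
                "local_path_port": 9090,
                "listener_port": 21500,
                "protocol": "http2"
              }
            ]
          }
        }
      }
    }
  }
}
```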
The fields in the certs are meant to hold the original binary
representation of this data, not some ascii-encoded version.
The only time we should be colon-hex-encoding fields is for display
purposes or marshaling through non-TLS mediums (like RPC).
Compiling this will set an optional SNI field on each DiscoveryTarget.
When set, this value should be used for TLS connections to the instances
of the target. If not set, the default should be used.
Setting ExternalSNI will disable mesh gateway use for that target. It also
disables several service-resolver features that do not make sense for an
external service.
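As a sketch, an external service could be pointed at a custom SNI via a service-defaults config entry along these lines (the name and SNI value are placeholders):
```
{
  "Kind": "service-defaults",
  "Name": "billing",
  "ExternalSNI": "billing.example.com"
}
```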
Add parameter local-only to operator keyring list requests to force queries to only hit local servers (no WAN traffic).
HTTP API: GET /operator/keyring?local-only=true
CLI: consul keyring -list -local-only
Sending the local-only flag with any non-GET/list request will result in an error.
In addition to exposing compilation over the API, this cleans up the exchanged structures to be easier to support and understand.
Also removed the ability to configure the Envoy OverprovisioningFactor.
* Retry the creation of the test server three times.
* Reduce the retry timeout for the API wait to 2 seconds, opting to fail faster and start over.
* Remove wait for leader from server creation. This wait can be added on a test by test basis now that the function is being exported.
* Remove wait for anti-entropy sync. This is built into the existing WaitForSerfCheck func, so that can be used if the anti-entropy wait is needed.
* Fix some tests that I broke when refactoring the ConfigSnapshot
* Make sure the MeshGateway config is added to all the right api structs
* Fix some more tests
If a KVSet is performed but does not update the entry, do not trigger
watches for this key.
This avoids releasing blocking queries for KV values that did not
actually change.
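To make the effect concrete, here is a minimal blocking query against a key using the Go api package (the key name and loop are illustrative); before this change, an idempotent KVSet on the key would release this query even though the data was unchanged:
```
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	var lastIndex uint64
	for {
		// Blocks until the key's index moves past lastIndex (or the
		// wait timeout elapses). A KVSet that does not modify the
		// entry no longer triggers the watch behind this query.
		pair, meta, err := kv.Get("app/config", &api.QueryOptions{WaitIndex: lastIndex})
		if err != nil {
			log.Fatal(err)
		}
		lastIndex = meta.LastIndex
		if pair != nil {
			fmt.Printf("value: %s\n", pair.Value)
		}
	}
}
```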
This allows addresses to be tagged at the service level similar to what we allow for nodes already. The address translation that can be enabled with the `translate_wan_addrs` config was updated to take these new addresses into account as well.
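As a rough example (addresses are made up), a service registration with tagged addresses might look like:
```
{
  "service": {
    "name": "web",
    "port": 8080,
    "tagged_addresses": {
      "lan": { "address": "10.0.0.10", "port": 8080 },
      "wan": { "address": "198.51.100.20", "port": 80 }
    }
  }
}
```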
* Add integration test for central config; fix central config WIP
* Add integration test for central config; fix central config WIP
* Set proxy protocol correctly and begin adding upstream support
* Add upstreams to service config cache key and start new notify watcher if they change.
This doesn't update the tests to pass though.
* Fix some merging logic get things working manually with a hack (TODO fix properly)
* Simplification to not allow enabling sidecars centrally - it makes no sense without upstreams anyway
* Test compile again and obvious ones pass. Lots of failures locally not debugged yet but may be flakes. Pushing up to see what CI does
* Fix up service manager and API test failures
* Remove the enable command since it no longer makes much sense without being able to turn on sidecar proxies centrally
* Remove version.go hack - will make integration test fail until release
* Remove unused code from commands and upstream merge
* Re-bump version to 1.5.0
* Add HTTP endpoints for config entry management
* Finish implementing decoding in the HTTP Config entry apply endpoint
* Add CAS operation to the config entry apply endpoint
Also use this for the bootstrapping and move the config entry decoding function into the structs package.
* First pass at the API client for the config entries
* Fixup some of the ConfigEntry APIs
Return a singular response object instead of a list for the ConfigEntry.Get RPC. This gets plumbed through the HTTP API as well.
Don't return QueryMeta in the JSON response for the config entry listing HTTP API. Instead just return a list of config entries.
* Minor API client fixes
* Attempt at some ConfigEntry api client tests
These don’t currently work due to weak typing in JSON
* Get some of the api client tests passing
* Implement reflectwalk magic to correct JSON encoding a ProxyConfigEntry
Also added a test for the HTTP endpoint that exposes the problem. However, since the test doesn't actually do the JSON encode/decode, it's still failing.
* Move MapWalk magic into a binary marshaller instead of JSON.
* Add a MapWalk test
* Get rid of unused func
* Get rid of unused imports
* Fixup some tests now that the decoding from msgpack coerces things into json compat types
* Stub out most of the central config cli
Fully implement the config read command.
* Basic config delete command implementation
* Implement config write command
* Implement config list subcommand
Not entirely sure about the output here. It's basically the read output, indented, with a line specifying the kind/name of each type, which is also duplicated in the indented output.
* Update command usage
* Update some help usage formatting
* Add the connect enable helper cli command
* Update list command output
* Rename the config entry API client methods.
* Use renamed apis
* Implement config write tests
Stub the others with the noTabs tests.
* Change list output format
Now just simply output 1 line per named config
* Add config read tests
* Add invalid args write test.
* Add config delete tests
* Add config list tests
* Add connect enable tests
* Update some CLI commands to use CAS ops
This also modifies the HTTP API for a write op to return a boolean indicating whether the value was written or not.
* Fix up the HTTP API CAS tests as I realized they weren’t testing what they should.
* Update config entry rpc tests to properly test CAS
* Fix up a few more tests
* Fix some tests that use ConfigEntries.Apply
* Update config_write_test.go
* Get rid of unused import
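Taken together, the subcommands cover the basic CRUD flow; a typical session might look like this (the file and entry names are illustrative):
```
consul config write web-defaults.hcl
consul config read -kind service-defaults -name web
consul config list -kind service-defaults
consul config delete -kind service-defaults -name web
```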
Roles are named and can express the same bundle of permissions that can
currently be assigned to a Token (lists of Policies and Service
Identities). The difference with a Role is that it is not itself a bearer
token, but just another entity that can be tied to a Token.
This lets an operator potentially curate a set of smaller reusable
Policies and compose them together into reusable Roles, rather than
always exploding that same list of Policies on any Token that needs
similar permissions.
This also refactors the acl replication code to be semi-generic to avoid
3x copypasta.
This is mainly to avoid having the API return "0001-01-01T00:00:00Z" as
a value for the ExpirationTime field when it is not set. Unfortunately
time.Time doesn't respect the json marshalling "omitempty" directive.
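The usual Go workaround is to marshal through a pointer so omitempty can drop the unset field; a minimal sketch (the struct here is hypothetical, not the actual API type):
```
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type tokenOut struct {
	AccessorID string
	// omitempty has no effect on a bare time.Time (a struct is never
	// "empty"), so a nil pointer is used to drop the field instead.
	ExpirationTime *time.Time `json:",omitempty"`
}

func main() {
	b, _ := json.Marshal(tokenOut{AccessorID: "abc"})
	fmt.Println(string(b)) // ExpirationTime is absent rather than zero-valued
}
```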
These act like a special cased version of a Policy Template for granting
a token the privileges necessary to register a service and its connect
proxy, and read upstreams from the catalog.
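As a sketch of the intended use, a token can be minted with a service identity straight from the CLI (the service name is illustrative):
```
consul acl token create -description "token for web" -service-identity "web"
```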
* Move the watch package into the api module
It was already just a thin wrapper around the API anyway. The biggest change was to the testing. Instead of using a test agent directly from the agent package it now uses the binary on the PATH just like the other API tests.
The other big changes were to fix up the connect based watch tests so that we didn’t need to pull in the connect package (and therefore all of Consul)
Fixes: #4222
# Data Filtering
This PR will implement filtering for the following endpoints:
## Supported HTTP Endpoints
- `/agent/checks`
- `/agent/services`
- `/catalog/nodes`
- `/catalog/service/:service`
- `/catalog/connect/:service`
- `/catalog/node/:node`
- `/health/node/:node`
- `/health/checks/:service`
- `/health/service/:service`
- `/health/connect/:service`
- `/health/state/:state`
- `/internal/ui/nodes`
- `/internal/ui/services`
More can be added going forward and any endpoint which is used to list some data is a good candidate.
## Usage
When using the HTTP API a `filter` query parameter can be used to pass a filter expression to Consul. Filter Expressions take the general form of:
```
<selector> == <value>
<selector> != <value>
<value> in <selector>
<value> not in <selector>
<selector> contains <value>
<selector> not contains <value>
<selector> is empty
<selector> is not empty
not <other expression>
<expression 1> and <expression 2>
<expression 1> or <expression 2>
```
Normal boolean logic and precedence are supported. All of the actual filtering and evaluation logic comes from the [go-bexpr](https://github.com/hashicorp/go-bexpr) library.
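For example, a filtered health query might look like this (the service name and selectors are illustrative):
```
curl --get 'localhost:8500/v1/health/service/web' \
    --data-urlencode 'filter=Checks.Status == "passing" and Service.Meta.version == "2"'
```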
## Other changes
Adding the `Internal.ServiceDump` RPC endpoint. This will allow the UI to filter services better.
* First conversion
* Use serf 0.8.2 tag and associated updated deps
* Move freeport and testutil into internal/
* Make internal/ its own module
* Update imports
* Add replace statements so API and normal Consul code are
self-referencing for ease of development
* Adapt to newer goe/values
* Bump to new cleanhttp
* Fix ban nonprintable chars test
* Update lock bad args test
The error message when the duration cannot be parsed changed in Go 1.12
(ae0c435877d3aacb9af5e706c40f9dddde5d3e67). This updates that test.
* Update another test as well
* Bump travis
* Bump circleci
* Bump go-discover and godo to get rid of launchpad dep
* Bump dockerfile go version
* fix tar command
* Bump go-cleanhttp
This PR adds two features which will be useful for operators when ACLs are in use.
1. Tokens set in configuration files are now reloadable.
2. If `acl.enable_token_persistence` is set to `true` in the configuration, tokens set via the `v1/agent/token` endpoint are now persisted to disk and loaded when the agent starts (or during configuration reload)
Note that token persistence is opt-in so our users who do not want tokens on the local disk will see no change.
Some other secondary changes:
* Refactored a bunch of places where the replication token is retrieved from the token store. This token isn't just for replicating ACLs and now it is named accordingly.
* Allowed better paths in the `v1/agent/token/` API. Instead of paths like: `v1/agent/token/acl_replication_token` the path can now be just `v1/agent/token/replication`. The old paths remain valid.
* Added a couple new API functions to set tokens via the new paths. Deprecated the old ones and pointed to the new names. The names are also generally better and don't imply that what you are setting is for ACLs but rather that you are setting ACL tokens. There is a minor semantic difference there, especially for the replication token since, again, it's no longer used only for ACL token/policy replication. The new functions will detect 404s and fall back to using the older token paths when talking to pre-1.4.3 agents.
* Docs updated to reflect the API additions and to show using the new endpoints.
* Updated the ACL CLI set-agent-tokens command to use the non-deprecated APIs.
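For example, the replication token can now be set via the shorter path (the token value is a placeholder):
```
curl -X PUT localhost:8500/v1/agent/token/replication -d '{"Token": "<token>"}'
```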
Given a query like:
```
{
"Name": "tagged-connect-query",
"Service": {
"Service": "foo",
"Tags": ["tag"],
"Connect": true
}
}
```
And a Consul configuration like:
```
{
"services": [
"name": "foo",
"port": 8080,
"connect": { "sidecar_service": {} },
"tags": ["tag"]
]
}
```
If you executed the query it would always turn up with 0 results. This was because the sidecar service was being created without any tags. You could instead make your config look like:
```
{
"services": [
"name": "foo",
"port": 8080,
"connect": { "sidecar_service": {
"tags": ["tag"]
} },
"tags": ["tag"]
]
}
```
However that is a bit redundant for most cases. This PR ensures that the tags and service meta of the parent service get copied to the sidecar service. If there are any tags or service meta set in the sidecar service definition then this copying does not take place. After the changes, the query will now return the expected results.
A second change was made to prepared queries in this PR which is to allow filtering on ServiceMeta just like we allow for filtering on NodeMeta.
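A sketch of that second change, assuming the new field mirrors NodeMeta and is named ServiceMeta:
```
{
  "Name": "tagged-connect-query",
  "Service": {
    "Service": "foo",
    "ServiceMeta": { "version": "2" },
    "Connect": true
  }
}
```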
* Support rate limiting and concurrency limiting CSR requests on servers; handle CA rotations gracefully with jitter and backoff-on-rate-limit in client
* Add CSR rate limiting docs
* Fix config naming and add tests for new CA configs
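A minimal sketch of the resulting CA configuration, assuming the keys are csr_max_per_second and csr_max_concurrent (the values shown are illustrative):
```
{
  "connect": {
    "ca_config": {
      "csr_max_per_second": 50,
      "csr_max_concurrent": 0
    }
  }
}
```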
* Add default weights when adding a service with no weights to local state to prevent constant AE re-sync.
This fix was contributed by @42wim in https://github.com/hashicorp/consul/pull/5096 but was merged against the wrong base. This adds it to master and adds a test to cover the behaviour.
* Fix tests that broke due to comparing internal state which now has default weights
This endpoint aggregates all checks related to <service id> on the agent
and returns an appropriate HTTP code plus a string describing the worst
check.
This makes it possible to cleanly expose service status to other components,
hiding the complexity of multiple checks.
This is especially useful when using Consul to feed a load balancer which
delegates health checking to the Consul agent.
Exposing this endpoint on the agent is necessary to avoid a hit on the
Consul servers and avoid decreasing resiliency (this endpoint will work
even if there is no Consul leader in the cluster).
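Assuming the agent health API shape, a request looks like the following, with the HTTP status code (e.g. 200 for passing, 429 for warning, 503 for critical) encoding the aggregated result:
```
curl -i localhost:8500/v1/agent/health/service/name/web
```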
* Update the ACL API docs
* Add a CreateTime to the anon token
Also require acl:read permissions at least to perform rule translation. Don’t want someone DoSing the system with an open endpoint that actually does a bit of work.
* Fix one place where I was referring to id instead of AccessorID
* Add godocs for the API package additions.
* Minor updates: removed some extra commas and updated the acl intro paragraph
* minor tweaks
* Updated the language to be clearer
* Updated the language to be clearer for policy page
* I was also confused by that! Your updates are much clearer.
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Sounds much better.
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Updated sidebar layout and deprecated warning
* A few API mods and unit tests.
* Update the unit tests to verify query/write metadata and to fix the rules endpoint tests.
* Make sure the full information for the replication status is in the api package
* Implement CLI token cloning & special ID handling
* Update a couple CLI commands to take some alternative options.
* Document the CLI.
* Update the policy list and set-agent-token synopsis
This PR is almost a complete rewrite of the ACL system within Consul. It brings the features more in line with other HashiCorp products. Obviously there is quite a bit left to do here but most of it is related to docs, testing and finishing the last few commands in the CLI. I will update the PR description and check off the todos as I finish them over the next few days/week.
Description
At a high level this PR is mainly to split ACL tokens from Policies and to split the concepts of Authorization from Identities. A lot of this PR is mostly just to support CRUD operations on ACLTokens and ACLPolicies. These in and of themselves are not particularly interesting. The bigger conceptual changes are in how tokens get resolved, how backwards compatibility is handled and the separation of policy from identity which could lead the way to allowing for alternative identity providers.
On the surface and with a new cluster the ACL system will look very similar to Nomad's. Both have tokens and policies. Both have local tokens. The ACL management APIs for both are very similar. I even ripped off Nomad's ACL bootstrap resetting procedure. There are a few key differences though.
Nomad requires token and policy replication where Consul only requires policy replication, with token replication being opt-in. In Consul, local tokens only work with token replication enabled, though.
All policies in Nomad are globally applicable. In Consul all policies are stored and replicated globally but can be scoped to a subset of the datacenters. This allows for more granular access management.
Unlike Nomad, Consul has legacy baggage in the form of the original ACL system. The ramifications of this are:
A server running the new system must still support other clients using the legacy system.
A client running the new system must be able to use the legacy RPCs when the servers in its datacenter are running the legacy system.
The primary ACL DC's servers running in legacy mode need to act as a gate that keeps everything else in the entire multi-DC cluster running in legacy mode.
So not only does this PR implement the new ACL system but has a legacy mode built in for when the cluster isn't ready for new ACLs. Also detecting that new ACLs can be used is automatic and requires no configuration on the part of administrators. This process is detailed more in the "Transitioning from Legacy to New ACL Mode" section below.
* agent/debug: add package for debugging, host info
* api: add v1/agent/host endpoint
* agent: add v1/agent/host endpoint
* command/debug: implementation of static capture
* command/debug: tests and only configured targets
* agent/debug: add basic test for host metrics
* command/debug: add methods for dynamic data capture
* api: add debug/pprof endpoints
* command/debug: add pprof
* command/debug: timing, wg, logs to disk
* vendor: add gopsutil/disk
* command/debug: add a usage section
* website: add docs for consul debug
* agent/host: require operator:read
* api/host: improve docs and no retry timing
* command/debug: fail on extra arguments
* command/debug: fixup file permissions to 0644
* command/debug: remove server flags
* command/debug: improve clarity of usage section
* api/debug: add Trace for profiling, fix profile
* command/debug: capture profile and trace at the same time
* command/debug: add index document
* command/debug: use "clusters" in place of members
* command/debug: remove address in output
* command/debug: improve comment on metrics sleep
* command/debug: clarify usage
* agent: always register pprof handlers and protect
This will allow us to avoid a restart of a target agent
for profiling by always registering the pprof handlers.
Given this is a potentially sensitive path, it is protected
by an operator:read ACL and by enable_debug being
set to true on the target agent. enable_debug still requires
a restart.
If ACLs are disabled, enable_debug is sufficient.
* command/debug: use trace.out instead of .prof
More in line with golang docs.
* agent: fix comment wording
* agent: wrap table driven tests in t.run()
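Putting the pieces together, a capture run might look like the following (the duration, interval and targets are illustrative):
```
consul debug -duration=2m -interval=30s -capture=metrics -capture=pprof -output=./debug-web
```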
* Support multiple tags for health and catalog api endpoints
Fixes #1781.
Adds a `ServiceTags` field to the ServiceSpecificRequest to support
multiple tags, updates the filter logic in the catalog store, and
propagates these change through to the health and catalog endpoints.
Note: Leaves `ServiceTag` in the struct, since it is being used as
part of the DNS lookup, which in turn uses the health check.
* Update the api package to support multiple tags
Includes additional tests.
* Update new tests to use the `require` library
* Update HealthConnect check after a bad merge
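For example, repeating the tag query parameter now filters on all of the given tags (the service and tags are made up):
```
curl 'localhost:8500/v1/health/service/web?tag=primary&tag=v2'
```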
Play a trick with CLOEXEC to pass the envoy bootstrap configuration as
an open file descriptor to the exec'd envoy process. The file only
briefly touches disk before being unlinked.
We convince envoy to read from this open file descriptor by using the
/dev/fd/$FDNUMBER mechanism to read the open file descriptor as a file.
Because the filename no longer has an extension envoy's sniffing logic
falls back on JSON instead of YAML, so the bootstrap configuration must
be generated as JSON instead.
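A rough, self-contained sketch of the trick under the stated assumptions (the helper name and error handling are illustrative, not the actual implementation):
```
package envoyexec

import (
	"fmt"
	"os"
	"syscall"

	"golang.org/x/sys/unix"
)

// execEnvoyWithBootstrap writes the bootstrap JSON to a temp file, unlinks
// it immediately, clears FD_CLOEXEC so the descriptor survives exec, and
// then execs envoy pointed at /dev/fd/<fd>.
func execEnvoyWithBootstrap(envoyPath string, bootstrapJSON []byte) error {
	f, err := os.CreateTemp("", "envoy-bootstrap")
	if err != nil {
		return err
	}
	if _, err := f.Write(bootstrapJSON); err != nil {
		return err
	}
	// Unlink right away; the open descriptor keeps the data alive.
	if err := os.Remove(f.Name()); err != nil {
		return err
	}
	// Go opens files with close-on-exec set, so clear it explicitly.
	if _, err := unix.FcntlInt(f.Fd(), unix.F_SETFD, 0); err != nil {
		return err
	}
	// Rewind so envoy reads from the start of the file.
	if _, err := f.Seek(0, 0); err != nil {
		return err
	}
	// No file extension, so envoy's sniffing falls back to JSON.
	return syscall.Exec(envoyPath,
		[]string{"envoy", "--config-path", fmt.Sprintf("/dev/fd/%d", f.Fd())},
		os.Environ())
}
```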
* Plumb xDS server and proxycfg into the agent startup
* Add `consul connect envoy` command to allow running Envoy as a connect sidecar.
* Add test for help tabs; typos and style fixups from review
- A new endpoint `/v1/agent/service/:service_id` which is a generic way to look up the service for a single instance. The primary value here is that it:
- **supports hash-based blocking** and so;
- **replaces `/agent/connect/proxy/:proxy_id`** as the mechanism the built-in proxy uses to read its config.
- It's not proxy specific and so works for any service.
- It has a temporary shim to call through to the existing endpoint to preserve current managed proxy config defaulting behaviour until that is removed entirely (tested).
- The built-in proxy now uses the new endpoint exclusively for its config
- The built-in proxy now has a `-sidecar-for` flag that allows the service ID of the _target_ service to be specified, on the condition that there is exactly one "sidecar" proxy (that is one that has `Proxy.DestinationServiceID` set) for the service registered.
- Several fixes for edge cases for SidecarService
- A fix for `Alias` checks - when running locally they didn't update their state until some external thing updated the target. If the target service had no checks registered, the alias never made it past critical.
* Added new Config for SidecarService in ServiceDefinitions.
* WIP: all the code needed for SidecarService is written... none of it is tested other than config :). Need API updates too.
* Test coverage for the new sidecarServiceFromNodeService method.
* Test API registration with SidecarService
* Recursive Key Translation 🤦
* Add tests for nested sidecar definition arrays to ensure they are translated correctly
* Use dedicated internal state rather than Service Meta for tracking sidecars for deregistration.
Add tests for deregistration.
* API struct for agent register. No other endpoint should be affected yet.
* Additional test cases to cover updates to API registrations
* Refactor Service Definition ProxyDestination.
This includes:
- Refactoring all internal structs used
- Updated tests for both deprecated and new input for:
- Agent Services endpoint response
- Agent Service endpoint response
- Agent Register endpoint
- Unmanaged deprecated field
- Unmanaged new fields
- Managed deprecated upstreams
- Managed new
- Catalog Register
- Unmanaged deprecated field
- Unmanaged new fields
- Managed deprecated upstreams
- Managed new
- Catalog Services endpoint response
- Catalog Node endpoint response
- Catalog Service endpoint response
- Updated API tests for all of the above too (both deprecated and new forms of register)
TODO:
- config package changes for on-disk service definitions
- proxy config endpoint
- built-in proxy support for new fields
* Agent proxy config endpoint updated with upstreams
* Config file changes for upstreams.
* Add upstream opaque config and update all tests to ensure it works everywhere.
* Built in proxy working with new Upstreams config
* Command fixes and deprecations
* Fix key translation, upstream type defaults and a spate of other subtle bugs found with end to end test scripts...
TODO: tests still failing on one case that needs a fix. I think it's key translation for upstreams nested in Managed proxy struct.
* Fix translated keys in API registration.
* Fixes from docs
- omit some empty undocumented fields in API
- Bring back ServiceProxyDestination in Catalog responses to not break backwards compat - this was removed assuming it was only used internally.
* Documentation updates for Upstreams in service definition
* Fixes for tests broken by many refactors.
* Enable travis on f-connect branch in this branch too.
* Add consistent Deprecation comments to ProxyDestination uses
* Update version number on deprecation notices, and correct upstream datacenter field with explanation in docs