* add namespace proto and registration
* fix proto generation
* add missing copyright headers
* fix proto linter errors
* fix exports and Type export
* add mutate hook and more validation
* add more validation rules and tests
* Apply suggestions from code review
Co-authored-by: Semir Patel <semir.patel@hashicorp.com>
* fix owner error and add test
* remove ACL for now
* add tests around space suffix/prefix.
* only fail when ns and ap are default, add test for it
---------
Co-authored-by: Semir Patel <semir.patel@hashicorp.com>
Ensure that configuring a FailoverPolicy for a service that is reachable via an xRoute or a direct upstream causes an Envoy aggregate cluster to be created for the original cluster name, but with separate clusters for each of the possible destinations.
Adding coauthors who mobbed/paired at various points throughout last week.
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
Co-authored-by: Iryna Shustava <iryna@hashicorp.com>
Co-authored-by: John Murret <john.murret@hashicorp.com>
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
Co-authored-by: Ashwin Venkatesh <ashwin@hashicorp.com>
Co-authored-by: Michael Wilkerson <mwilkerson@hashicorp.com>
Previously, when using implicit upstreams, we'd build an outbound listener per destination instead of one for all destinations. This would result in port conflicts when trying to send this config to Envoy.
This PR also makes sure that leaf and root references are always added (previously we would only add them if there were inbound non-mesh ports).
Also, black-hole traffic when there are no inbound ports other than the mesh port.
HTTPRoute, GRPCRoute, TCPRoute, and Upstreams resources contain inner
Reference fields. We want to ensure that any components of those references' Tenancy
fields that are left unspecified are defaulted using the tenancy of the enclosing resource.
Because the underlying normalization helper calls the function modified in #18822,
the PeerName field will also be set to "local" automatically for now, to avoid
"local" != "" issues downstream.
FailoverPolicy resources contain inner Reference fields. We want to ensure
that any components of those references' Tenancy fields that are left unspecified are
defaulted using the tenancy of the enclosing FailoverPolicy resource.
Because the underlying normalization helper calls the function modified in #18822,
the PeerName field will also be set to "local" automatically for now, to avoid
"local" != "" issues downstream.
Reworks the sidecar controller to accept ComputedRoutes as an input and use it to generate appropriate ProxyStateTemplate resources containing L4/L7 mesh configuration.
When a resource contains an inner field of type *pbresource.Reference, we want the
Tenancy to be reasonably defaulted according to the following rules:
1. The final values will be limited by the scope of the referenced type.
2. Values will be inferred from the parent's tenancy; if that is insufficient, the default
tenancy for the type's scope is used.
3. The namespace will only be taken from the parent if the reference and the parent share a
partition; otherwise the default namespace will be used.
Until we tackle peering, this hard-codes an assumption of the peer name being local. The
defaulting logic may need adjustment when that is addressed.
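A minimal sketch of how rules 2 and 3 might compose, using the public pbresource types; the helper name is hypothetical, the real normalization code may differ, and limiting the result by the referenced type's scope (rule 1) is omitted:
```
package example

import "github.com/hashicorp/consul/proto-public/pbresource"

// defaultReferenceTenancy is an illustrative sketch, not the real helper.
func defaultReferenceTenancy(ref *pbresource.Reference, parent *pbresource.Tenancy) {
	if ref.Tenancy == nil {
		ref.Tenancy = &pbresource.Tenancy{}
	}
	// Until peering is tackled, hard-code the local peer.
	if ref.Tenancy.PeerName == "" {
		ref.Tenancy.PeerName = "local"
	}
	// Inherit the partition from the parent, falling back to the default.
	if ref.Tenancy.Partition == "" {
		ref.Tenancy.Partition = parent.GetPartition()
	}
	if ref.Tenancy.Partition == "" {
		ref.Tenancy.Partition = "default"
	}
	// Only borrow the parent's namespace when the partitions match;
	// otherwise fall back to the default namespace.
	if ref.Tenancy.Namespace == "" && ref.Tenancy.Partition == parent.GetPartition() {
		ref.Tenancy.Namespace = parent.GetNamespace()
	}
	if ref.Tenancy.Namespace == "" {
		ref.Tenancy.Namespace = "default"
	}
}
```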
* Refactors the leafcert package so that it no longer depends on agent/consul and agent/cache, avoiding import cycles. This way the xDS controller can simply import the leafcert package to use the leaf cert manager.
The leaf cert logic in the controller:
* Sets up watches for leaf certs that are referenced in the ProxyStateTemplate (which generates the leaf certs too).
* Gets the leaf cert from the leaf cert cache
* Stores the leaf cert in the ProxyState that's pushed to xds
* For the cert watches, this PR also uses a bimapper + a thin wrapper to map leaf cert events to related ProxyStateTemplates
Because bimapper uses a resource.Reference or resource.ID to map between two resource types, I've created an internal type for a leaf certificate to use as the resource.Reference, since a leaf certificate is not a v2 resource.
The wrapper allows mapping events to resources (as opposed to mapping resources to resources)
The controller tests:
Unit: Ensure that we resolve leaf cert references
Lifecycle: Ensure that when the CA is updated, the leaf cert is updated as well
Also adds a new SPIFFE ID type, and adds the workload identity and workload identity URI to leaf certs, so that certs are generated with the new workload-identity-based SPIFFE ID.
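For illustration, a workload-identity-based SPIFFE ID could be built along these lines; the "/ap/.../ns/.../identity/..." path layout and function name are assumptions, not a statement of the real agent/connect format:
```
package example

import (
	"fmt"
	"net/url"
)

// workloadIdentitySpiffeID builds a SPIFFE URI from a workload identity and
// its tenancy. Illustrative only.
func workloadIdentitySpiffeID(trustDomain, partition, namespace, identity string) *url.URL {
	return &url.URL{
		Scheme: "spiffe",
		Host:   trustDomain,
		Path:   fmt.Sprintf("/ap/%s/ns/%s/identity/%s", partition, namespace, identity),
	}
}
```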
* Pulls out some leaf cert test helpers into a helpers file so it
can be used in the xds controller tests.
* Wires up leaf cert manager dependency
* Support getting token from proxytracker
* Add workload identity spiffe id type to the authorize and sign functions
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>
This new controller produces an intermediate output (ComputedRoutes) that is meant to summarize all relevant xRoutes and related mesh configuration in an easier-to-use format for downstream use to construct the ProxyStateTemplate.
It also applies status updates to the xRoute resource types to indicate that they are themselves semantically valid inputs.
* mesh-controller: handle L4 protocols for a proxy without upstreams
* sidecar-controller: Support explicit destinations for L4 protocols and single ports.
* This controller generates and saves ProxyStateTemplate for sidecar proxies.
* It currently supports single-port L4 ports only.
* It keeps a cache of all destinations to make it easier to compute and retrieve destinations.
* It will update the status of the pbmesh.Upstreams resource if anything is invalid.
* endpoints-controller: add workload identity to the service endpoints resource
* small fixes
* review comments
* Address PR comments
* sidecar-proxy controller: Add support for transparent proxy
This currently does not support inferring destinations from intentions.
* PR review comments
* mesh-controller: handle L4 protocols for a proxy without upstreams
* sidecar-controller: Support explicit destinations for L4 protocols and single ports.
* This controller generates and saves ProxyStateTemplate for sidecar proxies.
* It currently supports single-port L4 ports only.
* It keeps a cache of all destinations to make it easier to compute and retrieve destinations.
* It will update the status of the pbmesh.Upstreams resource if anything is invalid.
* endpoints-controller: add workload identity to the service endpoints resource
* small fixes
* review comments
* Make sure endpoint refs route to mesh port instead of an app port
* Address PR comments
* fixing copyright
* tidy imports
* sidecar-proxy controller: Add support for transparent proxy
This currently does not support inferring destinations from intentions.
* tidy imports
* add copyright headers
* Prefix sidecar proxy test files with source and destination.
* Update controller_test.go
* NET-5132 - Configure multiport routing for connect proxies in TProxy mode
* formatting golden files
* reverting golden files and adding changes in manually; building implicit destinations still has some issues.
* fixing files that were incorrectly repeating the outbound listener
* PR comments
* extract AlpnProtocol naming convention to getAlpnProtocolFromPortName(portName)
* removing address level filtering.
* adding license to resources_test.go
---------
Co-authored-by: Iryna Shustava <iryna@hashicorp.com>
Co-authored-by: R.B. Boyer <rb@hashicorp.com>
Co-authored-by: github-team-consul-core <github-team-consul-core@hashicorp.com>
This commit adds support for transparent proxy to the sidecar proxy controller. As we do not yet support inferring destinations from intentions, this assumes that all services in the cluster are destinations.
* This controller generates and saves ProxyStateTemplate for sidecar proxies.
* It currently supports single-port L4 ports only.
* It keeps a cache of all destinations to make it easier to compute and retrieve destinations.
* It will update the status of the pbmesh.Upstreams resource if anything is invalid.
* This commit also changes service endpoints to include workload identity. This made the implementation a bit easier as we don't need to look up as many workloads and instead rely on endpoints data.
We explicitly enumerate the allowed protocols in validation, so this
change is necessary to use the new enum value.
Also add tests for enum validators to ensure they stay aligned with the
protos unless we explicitly want them to diverge.
* Initial protohcl implementation
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: Daniel Upton <daniel@floppy.co>
* resourcehcl: implement resource decoding on top of protohcl
Co-authored-by: Daniel Upton <daniel@floppy.co>
* fix: resolve ci failures
* test: add additional unmarshalling tests
* refactor: update function test to clean protohcl package imports
---------
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: Daniel Upton <daniel@floppy.co>
* Adding explicit MPL license for sub-package
This directory and its subdirectories (packages) contain files licensed with the MPLv2 `LICENSE` file in this directory and are intentionally licensed separately from the BSL `LICENSE` file at the root of this repository.
* Updating the license from MPL to Business Source License
Going forward, this project will be licensed under the Business Source License v1.1. Please see our blog post for more details at <Blog URL>, FAQ at www.hashicorp.com/licensing-faq, and details of the license at www.hashicorp.com/bsl.
* add missing license headers
* Update copyright file headers to BUSL-1.1
---------
Co-authored-by: hashicorp-copywrite[bot] <110428419+hashicorp-copywrite[bot]@users.noreply.github.com>
* Add ServiceEndpoints Mutation hook tests
* Move endpoint owner validation into the validation hook
Also, there were some minor changes to error validation to account for go-cmp not liking to peer through an errors.errorString type that gets created by errors.New.
Also, change the ProxyState.id to identity. This is because we already have the id of this proxy
from the resource, and this id should be name-aligned with the workload it represents. It should
also have the owner ref set to the workload ID if we need that. And so the id field seems unnecessary.
We do, however, need a reference to workload identity so that we can authorize the proxy when it initially
connects to the xDS server.
This is a bit of a grab bag of helpers that I found useful when authoring substantial controllers. Subsequent PRs will make use of them.
Configuration that was previously inlined into the Upstreams resource
applies to both explicit and implicit upstreams, so it makes sense to
split it out into its own resource.
It also has other minor changes:
- Renames `proxy.proto` to `proxy_configuration.proto`
- Changes the type of `Upstream.destination_ref` from `pbresource.ID` to
`pbresource.Reference`
- Adds comments to fields that didn't have them
For consistency, resource type names must follow these rules:
- `Group` must be snake case, and in most cases a single word.
- `GroupVersion` must be lowercase, start with a "v" and end with a number.
- `Kind` must be pascal case.
These were chosen because they map to our protobuf type naming
conventions.
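A small sketch of how these rules could be checked; the regular expressions and function name here are illustrative, not the actual registry validation code:
```
package example

import (
	"fmt"
	"regexp"
)

// Illustrative patterns for the naming rules above.
var (
	groupRegexp        = regexp.MustCompile(`^[a-z][a-z\d_]*$`)   // snake case
	groupVersionRegexp = regexp.MustCompile(`^v[a-z\d]*\d$`)      // lowercase, "v" prefix, numeric suffix
	kindRegexp         = regexp.MustCompile(`^[A-Z][A-Za-z\d]*$`) // pascal case
)

func validateTypeName(group, groupVersion, kind string) error {
	if !groupRegexp.MatchString(group) {
		return fmt.Errorf("Group must be snake case: %q", group)
	}
	if !groupVersionRegexp.MatchString(groupVersion) {
		return fmt.Errorf("GroupVersion must be lowercase, start with \"v\" and end with a number: %q", groupVersion)
	}
	if !kindRegexp.MatchString(kind) {
		return fmt.Errorf("Kind must be pascal case: %q", kind)
	}
	return nil
}
```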
* Implement the Catalog V2 controller integration container tests
This now allows the container tests to import things from the root module. However, for now we want to be very restrictive about which packages we allow importing.
* Add an upgrade test for the new catalog
Currently this should be dormant and not executed. However, it's put in place to detect breaking changes in the future and to show an example of how to do an upgrade test with integration tests structured like catalog v2.
* Make testutil.Retry capable of performing cleanup operations
These cleanup operations are executed after each retry attempt.
* Move TestContext to taking an interface instead of a concrete testing.T
This allows it to be used with a retry.R or generally anything that meets the interface (a minimal sketch of that interface follows below).
* Move to using TestContext instead of background contexts
This also forces all test methods to implement the Cleanup method now, instead of that being an optional interface.
Co-authored-by: Daniel Upton <daniel@floppy.co>
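A minimal sketch of the interface shape mentioned above, assuming only that both *testing.T and retry.R expose a Cleanup method; the real testutil signature may differ:
```
package example

import "context"

// cleanuper is the minimal surface TestContext needs: anything that can
// register a cleanup function, e.g. *testing.T or a retry.R.
type cleanuper interface {
	Cleanup(func())
}

// TestContext returns a context that is cancelled when the test (or the
// current retry attempt) finishes.
func TestContext(t cleanuper) context.Context {
	ctx, cancel := context.WithCancel(context.Background())
	t.Cleanup(cancel)
	return ctx
}
```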
* Implement a Catalog Controllers Lifecycle Integration Test
* Prevent triggering the race detector.
This allows defining some variables for protobuf constants and using those in comparisons. Without that, something internal in the fmt package ended up looking at the protobuf message size cache and triggering the race detector.
* Add a ReplaceType dep mapper and move them into their own file
* Implement the service endpoints controller
* Implement a Catalog Controllers Integration Test
TL;DR: with many modules, the dependency versions included in each diverged quite a bit, and attempting to use Go workspaces produced a bunch of errors.
This commit:
1. Fixes envoy-library-references.sh to work again
2. Ensures we are pulling in go-control-plane@v0.11.0 everywhere (previously it was at that version in some modules and others were much older)
3. Removes one usage of golang/protobuf that caused us to have a direct dependency on it.
4. Removes deprecated usage of the Endpoint field in the grpc resolver.Target struct. The current version of grpc (v1.55.0) has removed that field and recommends replacing it with URL.Opaque and calls to the Endpoint() func when the previous field's value is needed.
5. `go work init <all the paths to go.mod files>` && `go work sync`. This synchronized versions of dependencies from the main workspace/root module to all submodules.
6. Updated .gitignore to ignore the go.work and go.work.sum files. This seems to be standard practice at the moment.
7. Updated doc comments in protoc-gen-consul-rate-limit to be go fmt compatible
8. Upgraded makefile infra to perform linting, testing and go mod tidy on all modules in a flexible manner.
9. Updated linter rules to prevent usage of golang/protobuf
10. Updated a leader peering test to account for an extra colon in a grpc error message.
* Fix straggler from renaming Register->RegisterTypes
* somehow a lint failure got through previously
* Fix lint-consul-retry errors
* adding in fix for success jobs getting skipped. (#17132)
* Temporarily disable inmem backend conformance test to get green pipeline
* Another test needs disabling
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>
Introduces `storage.Backend`, which will serve as the interface between the
Resource Service and the underlying storage system (Raft today, but in the
future, who knows!).
The primary design goal of this interface is to keep its surface area small,
and push as much functionality as possible into the layers above, so that new
implementations can be added with little effort, and easily proven to be
correct. To that end, we also provide a suite of "conformance" tests that can
be run against a backend implementation to check it behaves correctly.
In this commit, we introduce an initial in-memory storage backend, which is
suitable for tests and when running Consul in development mode. This backend is
a thin wrapper around the `Store` type, which implements a resource database
using go-memdb and our internal pub/sub system. `Store` will also be used to
handle reads in our Raft backend, and in the future, used as a local cache for
external storage systems.
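To make the "small surface area" goal concrete, here is a deliberately reduced, hypothetical sketch of the kind of interface involved; the method names and signatures are assumptions, not the real internal/storage API:
```
package example

import (
	"context"

	"github.com/hashicorp/consul/proto-public/pbresource"
)

// Backend is an illustrative, reduced sketch of a storage backend surface.
type Backend interface {
	Read(ctx context.Context, id *pbresource.ID) (*pbresource.Resource, error)
	Write(ctx context.Context, res *pbresource.Resource) (*pbresource.Resource, error)
	Delete(ctx context.Context, id *pbresource.ID) error
	List(ctx context.Context, typ *pbresource.Type, tenancy *pbresource.Tenancy, namePrefix string) ([]*pbresource.Resource, error)
}
```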
Protobuf Refactoring for Multi-Module Cleanliness
This commit includes the following:
Moves all packages that were within proto/ to proto/private
Rewrites imports to account for the packages being moved
Adds in buf.work.yaml to enable buf workspaces
Names the proto-public buf module so that we can override the Go package imports within proto/buf.yaml
Bumps the buf version dependency to 1.14.0 (I was trying out the version to see if it would get around an issue - it didn't but it also doesn't break things and it seemed best to keep up with the toolchain changes)
Why:
In the future we will need to consume other protobuf dependencies such as the Google HTTP annotations for openapi generation or grpc-gateway usage.
There were some recent changes to have our own ratelimiting annotations.
The two combined were not working when I was trying to use them together (attempting to rebase another branch)
Buf workspaces should be the solution to the problem
Buf workspaces means that each module will have generated Go code that embeds proto file names relative to the proto dir and not the top level repo root.
This resulted in proto file name conflicts in the Go global protobuf type registry.
The solution to that was to add in a private/ directory into the path within the proto/ directory.
That then required rewriting all the imports.
Is this safe?
AFAICT yes
The gRPC wire protocol doesn't seem to care about the proto file names (although the Go grpc code does tack on the proto file name as Metadata in the ServiceDesc)
Other than imports, there were no changes to any generated code as a result of this.
* Protobuf Modernization
Remove direct usage of golang/protobuf in favor of google.golang.org/protobuf
Marshallers (protobuf and json) needed some changes to account for different APIs.
Moved to using the google.golang.org/protobuf/types/known/* packages for the well-known types, including replacing some custom Struct manipulation with what's available in the structpb well-known type package (a sketch of that replacement follows below).
This also updates our devtools script to install protoc-gen-go from the right location so that files it generates conform to the correct interfaces.
* Fix go-mod-tidy make target to work on all modules
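The Struct replacement mentioned above, sketched with an assumed plain-map payload; structpb.NewStruct does the conversion that previously required hand-rolled manipulation:
```
package example

import "google.golang.org/protobuf/types/known/structpb"

// toProtoStruct converts an arbitrary JSON-like map into the Struct
// well-known type using the structpb package instead of building the
// message by hand.
func toProtoStruct(m map[string]interface{}) (*structpb.Struct, error) {
	return structpb.NewStruct(m)
}
```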
Adds automation for generating the map of `gRPC Method Name → Rate Limit Type`
used by the middleware introduced in #15550, and ensures we don't forget
to add new endpoints to the map.
Engineers must annotate their RPCs in the proto file like so:
```
rpc Foo(FooRequest) returns (FooResponse) {
  option (consul.internal.ratelimit.spec) = {
    operation_type: READ,
  };
}
```
When they run `make proto` a protoc plugin `protoc-gen-consul-rate-limit` will
be installed that writes rate-limit specs as a JSON array to a file called
`.ratelimit.tmp` (one per protobuf package/directory).
After running Buf, `make proto` will execute a post-process script that will
ingest all of the `.ratelimit.tmp` files and generate a Go file containing the
mappings in the `agent/grpc-middleware` package. In the enterprise repository,
it will write an additional file with the enterprise-only endpoints.
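For illustration only, the generated Go file amounts to a lookup table along these lines; the real file in the `agent/grpc-middleware` package uses that package's own operation-type constants, and the method name below is hypothetical:
```
package example

// OperationType stands in for the middleware's rate-limit operation kinds.
type OperationType int

const (
	OperationTypeExempt OperationType = iota
	OperationTypeRead
	OperationTypeWrite
)

// rpcRateLimitSpecs maps fully qualified gRPC method names to their
// rate-limit type.
var rpcRateLimitSpecs = map[string]OperationType{
	"/hashicorp.consul.internal.example.Bar/Foo": OperationTypeRead,
}
```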
If an engineer forgets to add the annotation to a new RPC, the plugin will
return an error like so:
```
RPC Foo is missing rate-limit specification, fix it with:

import "proto-public/annotations/ratelimit/ratelimit.proto";

service Bar {
  rpc Foo(...) returns (...) {
    option (hashicorp.consul.internal.ratelimit.spec) = {
      operation_type: OPERATION_READ | OPERATION_WRITE | OPERATION_EXEMPT,
    };
  }
}
```
In the future, this annotation can be extended to support rate-limit
category (e.g. KV vs Catalog) and to determine the retry policy.
* update go version to 1.18 for api and sdk, go mod tidy
* Removes ioutil usage everywhere; it was deprecated in Go 1.16 in favour of the io and os packages (typical replacements are sketched below). Also introduces a lint rule which forbids use of ioutil going forward.
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
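The typical one-for-one replacements look like this (file names here are purely illustrative):
```
package example

import (
	"io"
	"os"
)

func replacements() error {
	data, err := os.ReadFile("config.json") // was ioutil.ReadFile
	if err != nil {
		return err
	}
	if err := os.WriteFile("copy.json", data, 0o644); err != nil { // was ioutil.WriteFile
		return err
	}
	dir, err := os.MkdirTemp("", "consul-test") // was ioutil.TempDir
	if err != nil {
		return err
	}
	defer os.RemoveAll(dir)

	f, err := os.Open("copy.json")
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.ReadAll(f) // was ioutil.ReadAll
	return err
}
```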
Fix an issue where rpc_hold_timeout was being used as the timeout for non-blocking queries. Users should be able to tune read timeouts without fiddling with rpc_hold_timeout. A new configuration option, `rpc_read_timeout`, is created.
Refactor some implementation from the original PR 11500 to remove the misleading linkage between RPCInfo's timeout (used to retry in case of certain failure modes) and the client RPC timeouts.
Adds a timeout (deadline) to client RPC calls, so that streams will no longer hang indefinitely in unstable network conditions.
Co-authored-by: kisunji <ckim@hashicorp.com>
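A sketch of the general pattern behind the change above, assuming an illustrative wrapper rather than the actual agent code: the configured read timeout bounds the call via a context deadline, independently of rpc_hold_timeout.
```
package example

import (
	"context"
	"time"
)

// callWithReadTimeout bounds a non-blocking client RPC with a deadline
// derived from the configured rpc_read_timeout.
func callWithReadTimeout(ctx context.Context, readTimeout time.Duration, call func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, readTimeout)
	defer cancel()
	return call(ctx)
}
```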
This adds an aws-iam auth method type which supports authenticating to Consul using AWS IAM identities.
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
Previously we believed it was necessary for all code that required ports
to use freeport to prevent conflicts.
https://github.com/dnephin/freeport-test shows that it is actually safe
to use port 0 (`127.0.0.1:0`) as long as it is passed directly to
`net.Listen`, and the listener holds the port for as long as it is
needed.
This works because freeport explicitly avoids the ephemeral port range,
and port 0 always uses that range. As you can see from the test output
of https://github.com/dnephin/freeport-test, the two systems never use
overlapping ports.
This commit converts all uses of freeport that were being passed
directly to net.Listen to use port 0 instead. This allows us to remove
a bit of wrapping we had around httptest in a couple of places.
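A minimal sketch of the pattern, assuming a hypothetical test helper named newTestListener:
```
package example

import (
	"net"
	"testing"
)

// newTestListener asks the kernel for a free ephemeral port by listening on
// port 0, so freeport is not needed as long as the listener holds the port.
func newTestListener(t *testing.T) net.Listener {
	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatalf("failed to listen: %v", err)
	}
	t.Cleanup(func() { lis.Close() })
	return lis
}
```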
Add a skip condition to all tests slower than 100ms.
This change was made using `gotestsum tool slowest` with data from the
last 3 CI runs of master.
See https://github.com/gotestyourself/gotestsum#finding-and-skipping-slow-tests
With this change:
```
$ time go test -count=1 -short ./agent
ok github.com/hashicorp/consul/agent 0.743s
real 0m4.791s
$ time go test -count=1 -short ./agent/consul
ok github.com/hashicorp/consul/agent/consul 4.229s
real 0m8.769s
```
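The skip condition itself is the standard testing.Short guard; the test name and skip message below are illustrative:
```
package example

import "testing"

func TestSomethingSlow(t *testing.T) {
	if testing.Short() {
		t.Skip("too slow for -short run")
	}
	// ... slow test body ...
}
```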
Right now this is only hooked into the insecure RPC server and requires JWT authorization. If no JWT authorizer is set up in the configuration, then we inject a disabled "authorizer" that always reports that JWT authorization is disabled.
And fix the 'value not used' issues.
Many of these are not bugs, but a few are tests not checking errors, and
one appears to be a missed error in non-test code.
* First conversion
* Use serf 0.8.2 tag and associated updated deps
* Move freeport and testutil into internal/
* Make internal/ its own module
* Update imports
* Add replace statements so API and normal Consul code are
self-referencing for ease of development
* Adapt to newer goe/values
* Bump to new cleanhttp
* Fix ban nonprintable chars test
* Update lock bad args test
The error message when the duration cannot be parsed changed in Go 1.12
(ae0c435877d3aacb9af5e706c40f9dddde5d3e67). This updates that test.
* Update another test as well
* Bump travis
* Bump circleci
* Bump go-discover and godo to get rid of launchpad dep
* Bump dockerfile go version
* fix tar command
* Bump go-cleanhttp