* remove v2 tenancy, catalog, and mesh
- Inline the v2tenancy experiment to false
- Inline the resource-apis experiment to false
- Inline the hcp-v2-resource-apis experiment to false
- Remove ACL policy templates and rule language changes related to
workload identities (a v2-only concept) (e.g. identity and
identity_prefix)
- Update the gRPC endpoint used by consul-dataplane to no longer respond
specially for v2
- Remove stray v2 references scattered throughout the newer DNS v1.5
  implementation.
* changelog
* go mod tidy on consul containers
* lint fixes from ENT
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>
- Low level plumbing for resources is still retained for now.
- Retain "Workload" terminology over "Service".
- Revert "Destination" terminology back to "Upstream".
- Remove TPROXY support as it only worked for v2.
security: upgrade vault/api to remove go-jose.v2
This dependency has an open vulnerability (GO-2024-2631), and is no
longer needed by the latest `vault/api`. This is a follow-up to the
upgrade of `go-jose/v3` in this repository to make all our dependencies
consolidate on v3.
Also remove the recently added security scan triage block for
GO-2024-2631, which was added due to incorrect reports that
`go-jose/v3@3.0.3` was impacted; in reality, it was this indirect
client dependency (not impacted by CVE) that the scanner was flagging. A
bug report has been filed to address the incorrect reporting.
This adds a bunch of coverage of the topology.Compile method. It is not complete, but it is a start.
- A few panics and miscellany were fixed.
- The testing/deployer tests are now also run in CI.
NET-6653 Enable container logs when initConsulServers fails and return an error (container logs during launch() failure)
[OG Author: michael.zalimeni@hashicorp.com, rebase needed a separate PR]
* v2: support virtual port in Service port references
In addition to Service target port references, allow users to specify a
port by stringified virtual port value. This is useful in environments
such as Kubernetes where typical configuration is written in terms of
Service virtual ports rather than workload (pod) target port names.
Retaining the option of referencing target ports by name supports VMs,
Nomad, and other use cases where virtual ports are not used by default.
To support both use cases at once, we will strictly interpret port
references based on whether the value is numeric. See updated
`ServicePort` docs for more details.
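A rough sketch of that strict interpretation, with hypothetical names (the real logic lives in the `ServicePort` handling, not shown here): a numeric reference selects by virtual port, anything else matches a target port name.

```go
package ports

import "strconv"

// matchPort is a hypothetical helper illustrating the strict interpretation
// described above: a numeric reference selects by virtual port, while any
// non-numeric reference is treated as a target port name.
func matchPort(ref string, virtualPort uint32, targetPortName string) bool {
	if n, err := strconv.ParseUint(ref, 10, 32); err == nil {
		return uint32(n) == virtualPort
	}
	return ref == targetPortName
}
```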
* v2: update service ref docs for virtual port support
Update proto and generated .go files with docs reflecting virtual port
reference support.
* v2: add virtual port references to L7 topo test
Add coverage for mixed virtual and target port references to existing
test.
* update failover policy controller tests to work with computed failover policy and assert error conditions against FailoverPolicy and ComputedFailoverPolicy resources
* accumulate services; don't overwrite them in enterprise
---------
Co-authored-by: Michael Zalimeni <michael.zalimeni@hashicorp.com>
Co-authored-by: R.B. Boyer <rb@hashicorp.com>
The docker image used in CI/CD was referencing `registry.k8s.io/pause:3.3`,
which appears to no longer function correctly. This commit swaps over to a
HashiCorp-mirrored image that shouldn't have rate limits or disappearing
images.
* Implement In-Process gRPC for use by controller caching/indexing
This replaces the pipe-based listener implementation we were previously using. The new style can avoid cloning resources, which our controller caching/indexing takes advantage of to avoid duplicating resource objects in memory.
To keep controllers safe, and to let them modify data they get back from the cache and the resource service, the client presented in their runtime is wrapped with an autogenerated client that clones request and response messages as they pass through.
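A minimal sketch of that cloning idea, assuming nothing beyond `grpc.ClientConnInterface` and `proto.Clone`; the autogenerated wrapper in this PR is per-service, not a generic interceptor like this:

```go
package cloning

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"
)

// conn wraps an in-process grpc.ClientConnInterface and clones messages so
// that callers and the server never share mutable memory.
type conn struct {
	inner grpc.ClientConnInterface
}

func (c *conn) Invoke(ctx context.Context, method string, args, reply any, opts ...grpc.CallOption) error {
	// Clone the request so later caller mutations are invisible to the server.
	if m, ok := args.(proto.Message); ok {
		args = proto.Clone(m)
	}
	if err := c.inner.Invoke(ctx, method, args, reply, opts...); err != nil {
		return err
	}
	// Rebuild the reply from a deep copy so the caller may mutate it freely.
	if m, ok := reply.(proto.Message); ok {
		fresh := proto.Clone(m)
		proto.Reset(m)
		proto.Merge(m, fresh)
	}
	return nil
}

func (c *conn) NewStream(ctx context.Context, desc *grpc.StreamDesc, method string, opts ...grpc.CallOption) (grpc.ClientStream, error) {
	// Streaming calls would need analogous per-message cloning; omitted here.
	return c.inner.NewStream(ctx, desc, method, opts...)
}
```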
Another sizable change in this PR consolidates how server-specific gRPC services get registered and managed. Previously this was spread across a bunch of different methods and it was difficult to track down how gRPC services were registered; now it's all in one place.
* Fix race in tests
* Ensure the resource service is registered to the multiplexed handler for forwarding from client agents
* Expose peer streaming on the internal handler
* add build tags/import k8s specific proto packages
* fix generated import paths
* fix gomod linting issue
* mod tidy every go mod file
* revert protobuf version; to be handled in a different PR
* cleaned up new lines
* added newline to end of file
Conceptually renaming the following topology terms to avoid confusion with v2 and to better align with it:
- ServiceID -> ID
- Service -> Workload
- Upstream -> Destination
This should align between CE ef35525 and ENT 7f95226dbe40151c8f17dd4464784b60cf358dc1 in:
- testing/integration/consul-container
- test-integ
- testing/deployer
This updates the testing/deployer (aka "topology test") framework to allow a
v2-oriented topology to opt services into enabling TransparentProxy. The restrictions
are similar to that of #19046
The multiport Ports map added in #19046 was changed to allow the protocol to
be specified, but for now the only supported protocol is TCP, as only L4
currently functions on main.
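A sketch of what such a multiport definition might look like; the field names here are assumptions drawn from the description, not verified deployer types:

```go
package example

import "github.com/hashicorp/consul/testing/deployer/topology"

// Hypothetical multiport workload definition with explicit protocols; only
// "tcp" is supported until L7 lands on main.
var svc = &topology.Service{
	Ports: map[string]*topology.Port{
		"http":  {Number: 8080, Protocol: "tcp"},
		"admin": {Number: 9090, Protocol: "tcp"},
	},
}
```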
As part of making transparent proxy work, the DNS server needed a new zonefile
for responding to virtual.consul requests, since there is no Kubernetes DNS and
the Consul DNS work for v2 has not happened yet. Once Consul DNS supports v2 we should switch over. For now the format of queries is:
<service>--<namespace>--<partition>.virtual.consul
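A hypothetical helper showing how such a query name is assembled; only the `--`-separated format comes from the deployer change:

```go
package virtualdns

import "fmt"

// queryName builds a virtual.consul query name in the format above.
func queryName(service, namespace, partition string) string {
	return fmt.Sprintf("%s--%s--%s.virtual.consul", service, namespace, partition)
}
```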
Additionally:
- All transparent-proxy-enabled services are assigned a virtual IP in the 10.244.0.0/24
  range. This is something Consul will do in v2 at a later date, likely during 1.18.
- All services with exposed ports (non-mesh) are assigned a virtual port number for use
with tproxy
- The consul-dataplane image has been made un-distroless and given the necessary
  tools to execute consul connect redirect-traffic before running dataplane, thus simulating
  a Kubernetes init container in plain Docker.
This updates the testing/deployer (aka "topology test") framework to conditionally
configure and launch catalog constructs using v2 resources. This is controlled via a
Version field on the Node construct in a topology.Config. This only functions for a
dataplane type and has other restrictions that match the rest of v2 (no peering, no
wanfed, no mesh gateways).
Like config entries, you can statically provide a set of initial resources to be synced
when bringing up the cluster (beyond those that are generated for you such as
workloads, services, etc).
If you want to author a test that can be freely converted between v1 and v2 then that
is possible. If you switch to the multi-port definition on a topology.Service (aka
"workload/instance") then that makes v1 ineligible.
This also adds a starter set of "on every PR" integration tests for single and multiport
under test-integ/catalogv2
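A hedged sketch of what opting a node into v2 looks like; the field and constant names are assumptions based on the description above, not verified deployer API:

```go
package example

import "github.com/hashicorp/consul/testing/deployer/topology"

// v2Node is a hypothetical node definition: the Version field switches this
// dataplane node's catalog registration over to v2 resources.
var v2Node = &topology.Node{
	Kind:    topology.NodeKindDataplane,
	Version: topology.NodeVersionV2,
	Name:    "dc1-client1",
}
```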
* dns token
fix whitespace for docs and comments
fix test cases
remove tabs in help text
Add changelog
Peering dns test
Partial implementation of Peered DNS test
Swap to new topology lib
expose dns port for integration tests on client
remove partial test implementation
remove extra port exposure
remove changelog from the ent pr
Add dns token to set-agent-token switch
Add enterprise golden file
Use builtin/dns template in tests
Update ent dns policy
Update ent dns template test
remove local gen certs
fix templated policy specs
* add changelog
* go mod tidy
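For context on the builtin/dns templated policy exercised above, a minimal sketch of minting a DNS token with the Go API client, assuming the templated-policy fields present in recent consul/api versions:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Create a token from the builtin/dns templated policy; it can then be
	// assigned to the agent (e.g. via the new `consul acl set-agent-token dns`
	// switch mentioned above).
	token, _, err := client.ACL().TokenCreate(&api.ACLToken{
		Description: "agent dns token",
		TemplatedPolicies: []*api.ACLTemplatedPolicy{
			{TemplateName: "builtin/dns"},
		},
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created dns token:", token.AccessorID)
}
```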