Static DNS lookups, in addition to explicitly targeting a datacenter,
can now target a cluster peer. This was added in 95dc0c7b30 but never made it into the documentation.
The driving function for the change is `parseLocality` here: 0b1299c28d/agent/dns_oss.go (L25)
The biggest change is adjusting the standard lookup syntax to tie
`.<datacenter>` to `.dc` as required-together, and to add the analogous optional `.<cluster-peer>.peer` labels, for both A record and SRV record lookups.
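For illustration, the resulting query shapes look like this (service, datacenter, and peer names are placeholders):

```shell-session
# Datacenter lookup: <datacenter> is now always paired with the .dc label
$ dig @127.0.0.1 -p 8600 web.service.dc2.dc.consul A

# Cluster-peer lookup: the analogous <cluster-peer>.peer labels
$ dig @127.0.0.1 -p 8600 web.service.my-peer.peer.consul SRV
```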
Co-authored-by: David Yu <dyu@hashicorp.com>
* Propose new changes to API Gateway upgrade instructions
* fix build error
* update callouts to render correctly
* Add hideClipboard to log messages
* Added clarification around Consul on Kubernetes and CRDs
Update CA provider docs
Clarify that providers can differ between primary and secondary datacenters.
Provide a comparison chart for Consul vs. Vault CA providers.
Loosen Vault CA provider validation for RootPKIPath
Update Vault CA provider documentation
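For context, a minimal sketch of the kind of configuration these docs describe, pointing the mesh CA at Vault (address, token, and paths are placeholders):

```hcl
# Agent configuration: use Vault as the service mesh CA provider
connect {
  enabled     = true
  ca_provider = "vault"
  ca_config {
    address               = "https://vault.example.com:8200"
    token                 = "<vault-token>"
    root_pki_path         = "connect_root"
    intermediate_pki_path = "connect_inter"
  }
}
```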
* Fix formatting for webhook-certs Consul tutorial
* Make a small grammar change to also pick up whitespace changes necessary for formatting
---------
Co-authored-by: David Yu <dyu@hashicorp.com>
- Provide more realistic examples for setting properties not already
  supported natively by Consul
- Remove superfluous commas from HCL, correct the target service name, and
  fix service-defaults vs. proxy-defaults in the examples (see the sketch after this list)
- Align existing integration test to updated docs
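To make the distinction concrete, a minimal sketch of the two entry kinds the examples separate (names and values are placeholders): `service-defaults` is scoped to one named service, while `proxy-defaults` applies mesh-wide.

```hcl
# service-defaults: applies only to the "web" service
Kind     = "service-defaults"
Name     = "web"
Protocol = "http"
```

```hcl
# proxy-defaults: applies to all proxies in the mesh
Kind = "proxy-defaults"
Name = "global"
Config {
  protocol = "http"
}
```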
* add enterprise notes for IP-based rate limits (see the sketch after this list)
* Apply suggestions from code review
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
Co-authored-by: David Yu <dyu@hashicorp.com>
* added bolded 'Enterprise' in list items.
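As a sketch of where these notes land, the request-limits block in agent configuration (mode and rates are placeholders; the IP-based limits themselves are an Enterprise feature):

```hcl
limits {
  request_limits {
    mode       = "permissive"  # "disabled", "permissive", or "enforcing"
    read_rate  = 500           # reads per second
    write_rate = 200           # writes per second
  }
}
```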
---------
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
Co-authored-by: David Yu <dyu@hashicorp.com>
* additional feedback
* Update website/content/docs/api-gateway/upgrades.mdx
Co-authored-by: Jeff Apple <79924108+Jeff-Apple@users.noreply.github.com>
---------
Co-authored-by: Jeff Apple <79924108+Jeff-Apple@users.noreply.github.com>
* trimmed CRD step and reqs from installation
* updated tech specs
* Apply suggestions from code review
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
Co-authored-by: Jeff Apple <79924108+Jeff-Apple@users.noreply.github.com>
* added upgrade instruction
* removed tcp port req
* described downtime and downtime-less upgrades (see the sketch below)
* applied additional review feedback
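For orientation, the documented upgrade ultimately runs through Helm once the values file is updated; a minimal sketch (release name, namespace, and values are placeholders):

```shell-session
$ helm upgrade consul hashicorp/consul --namespace consul --values values.yaml
```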
---------
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
Co-authored-by: Jeff Apple <79924108+Jeff-Apple@users.noreply.github.com>
* porting over changes from the enterprise repo to OSS
* applied feedback on service mesh for k8s overview
* fixed typo
* removed ent-only build script file
* Apply suggestions from code review
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: David Yu <dyu@hashicorp.com>
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
---------
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
Co-authored-by: David Yu <dyu@hashicorp.com>
* add docs for consul-k8s config read command
This PR adds documentation for the functionality introduced in
https://github.com/hashicorp/consul-k8s/pull/2078.
* add output
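A minimal usage sketch (the output shown is an assumption; see the linked PR for the exact format):

```shell-session
$ consul-k8s config read
# Prints the Helm values currently applied to the Consul installation
```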
---------
Co-authored-by: David Yu <dyu@hashicorp.com>
Fix ACL check on health endpoint
Prior to this change, the service health API would not explicitly return an
error whenever a token with invalid permissions was given, and it would instead
return empty results. With this change, a "Permission denied" error is returned
whenever data is queried. This is done to better support the agent cache, which
performs a fetch backoff sleep whenever ACL errors are encountered. Affected
endpoints are: `/v1/health/connect/` and `/v1/health/ingress/`.
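Illustration of the behavior change (token and service name are placeholders):

```shell-session
$ curl -H "X-Consul-Token: <token-without-service-read>" \
    http://127.0.0.1:8500/v1/health/connect/web
# Before: HTTP 200 with an empty result set
# After: a "Permission denied" error
```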
* agent: configure server lastseen timestamp
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use correct config
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use default age in test golden data
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add changelog
Signed-off-by: Dan Bond <danbond@protonmail.com>
* fix runtime test
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: add server_metadata
Signed-off-by: Dan Bond <danbond@protonmail.com>
* update comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* correctly check if metadata file does not exist
Signed-off-by: Dan Bond <danbond@protonmail.com>
* follow instructions for adding new config
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* update comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* Update agent/agent.go
Co-authored-by: Dan Upton <daniel@floppy.co>
* agent/config: add validation for duration with min
Signed-off-by: Dan Bond <danbond@protonmail.com>
* docs: add new server_rejoin_age_max config definition (see the sketch after this list)
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: add unit test for checking server last seen
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: log continually for 60s before erroring
Signed-off-by: Dan Bond <danbond@protonmail.com>
* pr comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* remove unneeded todo
* agent: fix error message
Signed-off-by: Dan Bond <danbond@protonmail.com>
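The sketch promised above, showing the new setting in agent configuration (the value is a placeholder, not the documented default):

```hcl
# Error on startup if this server has been offline longer than this window
server                = true
server_rejoin_age_max = "168h"
```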
---------
Signed-off-by: Dan Bond <danbond@protonmail.com>
Co-authored-by: Dan Upton <daniel@floppy.co>
* snapshot: some improvements to the snapshot process
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
Remove outdated usage of "Consul Connect" in favor of "Consul service mesh".
The connect subsystem in Consul provides Consul's service mesh capabilities.
However, the term "Consul Connect" should not be used as an alternative to
the name "Consul service mesh".
* Add MaxEjectionPercent to config entry
* Add BaseEjectionTime to config entry
* Add MaxEjectionPercent and BaseEjectionTime to protobufs
* Add MaxEjectionPercent and BaseEjectionTime to api
* Fix integration test breakage
* Verify MaxEjectionPercent and BaseEjectionTime in integration test upstream configs
* Website docs for MaxEjectionPercent and BaseEjectionTime (see the sketch after this list)
* Add `make docs` to browse docs at http://localhost:3000
* Changelog entry
* so that is the difference between consul-docker and dev-docker
* blah
* update proto funcs
* update proto
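The sketch promised above: where the new settings sit, assuming the `PassiveHealthCheck` block of `service-defaults` upstream configuration (names and values are placeholders):

```hcl
Kind = "service-defaults"
Name = "web"

UpstreamConfig {
  Defaults {
    PassiveHealthCheck {
      MaxFailures        = 5
      Interval           = "10s"
      MaxEjectionPercent = 50    # cap on the % of upstream hosts that may be ejected
      BaseEjectionTime   = "30s" # base duration an ejected host stays ejected
    }
  }
}
```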
---------
Co-authored-by: Maliz <maliheh.monshizadeh@hashicorp.com>
* added method for converting SamenessGroupConfigEntry
- added new method `ToQueryFailoverTargets` for converting a SamenessGroupConfigEntry's members to a list of QueryFailoverTargets
- renamed `ToFailoverTargets` to `ToServiceResolverFailoverTargets` to distinguish it from `ToQueryFailoverTargets`
* Added SamenessGroup to PreparedQuery
- exposed Service.Partition to API when defining a prepared query
- added a method for determining if a QueryFailoverOptions is empty
- This will be useful for validation
- added unit tests
* added method for retrieving a SamenessGroup to state store
* added logic for using PQ with SamenessGroup
- added a branching path for SamenessGroup handling in execute. It will be handled separately from the normal PQ case
- added a new interface so that `GetSamenessGroupFailoverTargets` can be properly tested
- separated the execute logic into a `targetSelector` function so that it can be used for both failover and sameness-group PQs
- split OSS-only methods into new PQ OSS files
- added validation that `samenessGroup` is an enterprise-only feature
* added documentation for PQ SamenessGroup
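A hedged sketch of creating such a prepared query over the HTTP API (the placement of the `SamenessGroup` field is an assumption; names are placeholders):

```shell-session
$ curl -X POST http://127.0.0.1:8500/v1/query -d '{
  "Name": "web-query",
  "Service": {
    "Service": "web",
    "SamenessGroup": "store-group"
  }
}'
```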
Prior to this change, peer services would be targeted by service-default
overrides as long as the new `peer` field was not found in the config entry.
This commit removes that deprecated backwards-compatibility behavior. Now
it is necessary to specify the `peer` field in order for upstream overrides
to apply to a peer upstream.
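A minimal sketch of an override that now must name the peer explicitly (service and peer names are placeholders):

```hcl
Kind = "service-defaults"
Name = "web"

UpstreamConfig {
  Overrides = [
    {
      Name     = "db"
      Peer     = "my-peer" # now required for the override to apply to a peer upstream
      Protocol = "http"
    }
  ]
}
```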
* Fix API GW broken link
* Update website/content/docs/api-gateway/upgrades.mdx
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
---------
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
This is part of an effort to raise awareness that you need to monitor
your mesh CA if it comes from an external source, as you'll need to manage
the rotation.
* converted the intentions config entry doc to the reference content format (see the sketch after this list)
* set up intentions nav
* add page for intentions usage
* final intentions usage page
* final intentions overview page
* fixed old relative links
* updated diagram for overview
* updated links to intentions content
* fixed typo in updated links
* rename intentions overview page file to index
* rollback link updates to intentions overview
* fixed nav
* Updated custom HTML in API and CLI pages to MD
* applied suggestions from review to index page
* moved conf examples from usage to conf ref
* missed custom HTML section
* applied additional feedback
* Apply suggestions from code review
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
* updated headings in usage page
* renamed files and updated nav
* updated links to new file names
* added redirects and final tweaks
* typo
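The sketch promised above, the kind of configuration entry these pages document; a minimal L4 intention (service names are placeholders):

```hcl
Kind = "service-intentions"
Name = "web"

Sources = [
  {
    Name   = "api"
    Action = "allow"
  }
]
```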
---------
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
* Fix broken links in Consul docs
* more broken link fixes
* more 404 fixes
* 404 fixes
* broken link fix
---------
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
* First cluster gRPC service should be NodePort
This is based on the issue opened here https://github.com/hashicorp/consul-k8s/issues/1903
If you follow the documentation https://developer.hashicorp.com/consul/docs/k8s/deployment-configurations/single-dc-multi-k8s exactly as it is, the first cluster will only create the Consul UI service as a NodePort, but not the rest of the services (including the one for gRPC). By default, the Helm chart creates them as headless services by setting clusterIP to None. This causes an issue when the second cluster tries to discover the Consul server on the first cluster over gRPC: it simply cannot connect through the default gRPC port 8502, and it ends up with the error shown in https://github.com/hashicorp/consul-k8s/issues/1903
As a solution, the gRPC service should be exposed using a NodePort (or LoadBalancer) service. I added the required changes to both cluster1-values.yaml and cluster2-values.yaml, along with a description of those changes to help users understand them. Kindly review; I hope this PR will be accepted.
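A hedged sketch of the shape of the values change (the exact chart keys, including `server.exposeService`, are assumptions to verify against the Helm chart):

```yaml
# cluster1-values.yaml excerpt: expose the server gRPC endpoint via NodePort
server:
  exposeService:
    enabled: true
    type: NodePort
```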
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
---------
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Ashvitha Sridharan <ashvitha.sridharan@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
Add a new Envoy flag: "envoy_hcp_metrics_bind_socket_dir", a directory
where a Unix socket will be created with the name
`<namespace>_<proxy_id>.sock` to forward Envoy metrics.
If set, this will configure:
- In the bootstrap configuration, a local stats_sink and a static cluster.
  These will forward metrics to a loopback listener that is delivered over xDS.
- A dynamic listener listening at the socket path that the previously
defined static cluster is sending metrics to.
- A dynamic cluster that will forward traffic received at this listener
to the hcp-metrics-collector service.
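Assuming the option is set like other `envoy_*` options in proxy configuration, a minimal sketch (the directory is a placeholder):

```hcl
Kind = "proxy-defaults"
Name = "global"
Config {
  # Directory where the <namespace>_<proxy_id>.sock metrics socket is created
  envoy_hcp_metrics_bind_socket_dir = "/var/run/consul/hcp-metrics"
}
```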
Reasons for having a static cluster pointing at a dynamic listener:
- We want to secure the metrics stream using TLS, but the stats sink can
only be defined in bootstrap config. With dynamic listeners/clusters
we can use the proxy's leaf certificate issued by the Connect CA,
which isn't available at bootstrap time.
- We want to intelligently route to the HCP collector. Configuring its
address at bootstrap time limits our flexibility routing-wise. More
on this below.
Reasons for defining the collector as an upstream in `proxycfg`:
- The HCP collector will be deployed as a mesh service.
- Certificate management is taken care of, as mentioned above.
- Service discovery and routing logic is automatically taken care of,
meaning that no code changes are required in the xds package.
- Custom routing rules can be added for the collector using discovery
chain config entries. Initially the collector is expected to be
deployed to each admin partition, but in the future could be deployed
centrally in the default partition. These config entries could even be
managed by HCP itself.
* fixes for unsupported partitions field in CRD metadata block
* Apply suggestions from code review
Co-authored-by: Luke Kysow <1034429+lkysow@users.noreply.github.com>
---------
Co-authored-by: Luke Kysow <1034429+lkysow@users.noreply.github.com>
* Update the consul-k8s cli docs for the new `proxy log` subcommand
* Updated consul-k8s docs from PR feedback
* Added proxy log command to release notes
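Basic usage sketch (the pod name is a placeholder; the output description is an assumption):

```shell-session
$ consul-k8s proxy log <pod-name>
# Lists the current log levels for the Envoy loggers of that pod's sidecar
```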
Updated the Params field to re-frame it as supporting arguments specific to the
supported vault-agent auth methods, with links to each method's
"#configuration" section.
Included a callout about the limits on supported parameters.