* Add MaxEjectionPercent to config entry
* Add BaseEjectionTime to config entry
* Add MaxEjectionPercent and BaseEjectionTime to protobufs
* Add MaxEjectionPercent and BaseEjectionTime to api (see the sketch after this list)
* Fix integration test breakage
* Verify MaxEjectionPercent and BaseEjectionTime in integration test upstream configs
* Website docs for MaxEjectionPercent and BaseEjectionTime
* Add `make docs` to browse docs at http://localhost:3000
* Changelog entry
* so that is the difference between consul-docker and dev-docker
* update proto funcs
* update proto
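Below is a minimal sketch, assuming the Go `api` package, of how the new fields might be set on a `service-defaults` config entry under `PassiveHealthCheck`; the service name, values, and pointer types are illustrative assumptions rather than the exact change in this PR.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	maxEjection := uint32(50)        // assumed: eject at most 50% of an upstream's hosts
	baseEjection := 30 * time.Second // assumed: first ejection lasts 30s, growing on repeat ejections

	entry := &api.ServiceConfigEntry{
		Kind:     api.ServiceDefaults,
		Name:     "web",
		Protocol: "http",
		UpstreamConfig: &api.UpstreamConfiguration{
			Defaults: &api.UpstreamConfig{
				PassiveHealthCheck: &api.PassiveHealthCheck{
					Interval:           10 * time.Second,
					MaxFailures:        5,
					MaxEjectionPercent: &maxEjection,
					BaseEjectionTime:   &baseEjection,
				},
			},
		},
	}

	// Write the config entry; the outlier detection settings apply to upstreams of "web".
	ok, _, err := client.ConfigEntries().Set(entry, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("service-defaults written:", ok)
}
```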
---------
Co-authored-by: Maliz <maliheh.monshizadeh@hashicorp.com>
* added method for converting SamenessGroupConfigEntry
- added new method `ToQueryFailoverTargets` for converting a SamenessGroupConfigEntry's members to a list of QueryFailoverTargets
- renamed `ToFailoverTargets` to `ToServiceResolverFailoverTargets` to distinguish it from `ToQueryFailoverTargets` (see the sketch after this list)
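An illustrative sketch, not the actual implementation, of what the new conversion might look like; the member and target types below are simplified stand-ins for the real `structs` types, and their field names are assumptions.

```go
package sketch

// Simplified stand-ins for the real structs types; field names are assumptions.
type SamenessGroupMember struct {
	Partition string // set for local-partition members
	Peer      string // set for peered-cluster members
}

type QueryFailoverTarget struct {
	Peer      string
	Partition string
}

// ToQueryFailoverTargets sketches converting a sameness group's members into
// prepared-query failover targets, preserving member order.
func ToQueryFailoverTargets(members []SamenessGroupMember) []QueryFailoverTarget {
	targets := make([]QueryFailoverTarget, 0, len(members))
	for _, m := range members {
		targets = append(targets, QueryFailoverTarget{
			Peer:      m.Peer,
			Partition: m.Partition,
		})
	}
	return targets
}
```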
* Added SamenessGroup to PreparedQuery
- exposed Service.Partition to the API when defining a prepared query (see the sketch after this list)
- added a method for determining whether a QueryFailoverOptions is empty, which will be useful for validation
- added unit tests
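A minimal sketch, assuming the Go `api` package, of defining a prepared query that targets a sameness group; the `SamenessGroup` and `Partition` field names follow the bullets above, and the service, partition, and group names are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	def := &api.PreparedQueryDefinition{
		Name: "web-sameness",
		Service: api.ServiceQuery{
			Service:       "web",
			Partition:     "team-a",  // Partition is now settable when defining the query
			SamenessGroup: "product", // failover walks the group's members in order
		},
	}

	id, _, err := client.PreparedQuery().Create(def, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created prepared query:", id)
}
```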
* added method for retrieving a SamenessGroup to state store
* added logic for using PQ with SamenessGroup
- added a branching path for SamenessGroup handling in execute; it is handled separately from the normal PQ case
- added a new interface so that `GetSamenessGroupFailoverTargets` can be properly tested
- separated the execute logic into a `targetSelector` function so that it can be used for both failover and sameness group PQs (see the sketch after this list)
- split OSS-only methods into new PQ OSS files
- added validation that `samenessGroup` is an enterprise-only feature
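An illustrative sketch of the execute-time branching described above, reusing the simplified `QueryFailoverTarget` type from the earlier sketch; the interface and function names mirror the bullets, but the real signatures in the agent code will differ.

```go
package sketch

// queryTargets is a stand-in for the interface that lets
// GetSamenessGroupFailoverTargets be faked in tests.
type queryTargets interface {
	GetSamenessGroupFailoverTargets(group string) ([]QueryFailoverTarget, error)
}

// selectTargets sketches the targetSelector split: sameness-group prepared
// queries take a separate path from the normal failover case.
func selectTargets(store queryTargets, samenessGroup string, failover []QueryFailoverTarget) ([]QueryFailoverTarget, error) {
	if samenessGroup != "" {
		return store.GetSamenessGroupFailoverTargets(samenessGroup)
	}
	return failover, nil
}
```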
* added documentation for PQ SamenessGroup
Prior to this change, peer services would be targeted by service-defaults
overrides as long as the new `peer` field was not found in the config entry.
This commit removes that deprecated backwards-compatibility behavior. Now
it is necessary to specify the `peer` field in order for upstream overrides
to apply to a peer upstream.
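A minimal sketch, assuming the Go `api` package, of an upstream override that must now name the peer explicitly in order to match a peered upstream; the service, upstream, and peer names are placeholders.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	entry := &api.ServiceConfigEntry{
		Kind: api.ServiceDefaults,
		Name: "web",
		UpstreamConfig: &api.UpstreamConfiguration{
			Overrides: []*api.UpstreamConfig{
				{
					Name:     "api",
					Peer:     "partner-cluster", // without Peer, the override no longer applies to the peered upstream
					Protocol: "http",
				},
			},
		},
	}

	if _, _, err := client.ConfigEntries().Set(entry, nil); err != nil {
		log.Fatal(err)
	}
}
```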
* Fix API GW broken link
* Update website/content/docs/api-gateway/upgrades.mdx
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
---------
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
This is part of an effort to raise awareness that you need to monitor
your mesh CA when it comes from an external source, since you'll need to
manage its rotation.
* converted intentions conf entry to ref CT format
* set up intentions nav
* add page for intentions usage
* final intentions usage page
* final intentions overview page
* fixed old relative links
* updated diagram for overview
* updated links to intentions content
* fixed typo in updated links
* rename intentions overview page file to index
* rollback link updates to intentions overview
* fixed nav
* Updated custom HTML in API and CLI pages to MD
* applied suggestions from review to index page
* moved conf examples from usage to conf ref
* missed custom HTML section
* applied additional feedback
* Apply suggestions from code review
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
* updated headings in usage page
* renamed files and updated nav
* updated links to new file names
* added redirects and final tweaks
* typo
---------
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
* Fix broken links in Consul docs
* more broken link fixes
* more 404 fixes
* 404 fixes
* broken link fix
---------
Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com>
* First cluster grpc service should be NodePort
This is based on the issue opened here https://github.com/hashicorp/consul-k8s/issues/1903
If you follow the documentation https://developer.hashicorp.com/consul/docs/k8s/deployment-configurations/single-dc-multi-k8s exactly as it is, the first cluster will only create the Consul UI service as a NodePort, but not the rest of the services (including the one for gRPC). By default, the Helm chart creates them as headless services by setting clusterIP to None. This causes an issue for the second cluster when discovering the Consul server in the first cluster over gRPC, because it simply cannot connect through the default gRPC port 8502, and it ends up in the error shown in https://github.com/hashicorp/consul-k8s/issues/1903
As a solution, the gRPC service should be exposed using NodePort (or LoadBalancer). I added the required changes to both cluster1-values.yaml and cluster2-values.yaml, along with a description of those changes so that users can understand them. Kindly review; I hope this PR will be accepted.
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
---------
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Ashvitha Sridharan <ashvitha.sridharan@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
Add a new Envoy flag, "envoy_hcp_metrics_bind_socket_dir": a directory
where a unix socket named `<namespace>_<proxy_id>.sock` will be created
to forward Envoy metrics (a path sketch follows the list below).
If set, this will configure:
- In bootstrap configuration a local stats_sink and static cluster.
These will forward metrics to a loopback listener sent over xDS.
- A dynamic listener listening at the socket path that the previously
defined static cluster is sending metrics to.
- A dynamic cluster that will forward traffic received at this listener
to the hcp-metrics-collector service.
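A small sketch of the socket naming convention described above; the directory and example values are placeholders, not defaults.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// hcpMetricsSocketPath composes <dir>/<namespace>_<proxy_id>.sock as described above.
func hcpMetricsSocketPath(dir, namespace, proxyID string) string {
	return filepath.Join(dir, fmt.Sprintf("%s_%s.sock", namespace, proxyID))
}

func main() {
	fmt.Println(hcpMetricsSocketPath("/run/consul", "default", "web-sidecar-proxy"))
	// Output: /run/consul/default_web-sidecar-proxy.sock
}
```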
Reasons for having a static cluster pointing at a dynamic listener:
- We want to secure the metrics stream using TLS, but the stats sink can
only be defined in bootstrap config. With dynamic listeners/clusters
we can use the proxy's leaf certificate issued by the Connect CA,
which isn't available at bootstrap time.
- We want to intelligently route to the HCP collector. Configuring its
address at bootstrap time limits our routing flexibility. More on this
below.
Reasons for defining the collector as an upstream in `proxycfg`:
- The HCP collector will be deployed as a mesh service.
- Certificate management is taken care of, as mentioned above.
- Service discovery and routing logic is automatically taken care of,
meaning that no code changes are required in the xds package.
- Custom routing rules can be added for the collector using discovery
chain config entries. Initially the collector is expected to be
deployed to each admin partition, but in the future could be deployed
centrally in the default partition. These config entries could even be
managed by HCP itself.
* fixes for unsupported partitions field in CRD metadata block
* Apply suggestions from code review
Co-authored-by: Luke Kysow <1034429+lkysow@users.noreply.github.com>
---------
Co-authored-by: Luke Kysow <1034429+lkysow@users.noreply.github.com>