* Enable filtering language support for the v1/connect/intentions listing API
* Update website for filtering of Intentions
* Update website/source/api/connect/intentions.html.md
* Change style to match "join" singular
- Replaced "(Consul) cluster" with "Consul Datacenter"
- Removed "ing" so the feature fits "Consul Auto-join", and that the tense is correct.
Co-authored-by: danielehc <40759828+danielehc@users.noreply.github.com>
* [docs] Built-in Proxies not meant for production
* Adding link to Envoy for Connect
* Update website/source/docs/connect/proxies/built-in.md
Co-Authored-By: Blake Covarrubias <blake@covarrubi.as>
* Revising note
* Update website/source/docs/connect/proxies/built-in.md
period
Co-Authored-By: Hans Hasselberg <me@hans.io>
Co-authored-by: Blake Covarrubias <blake@covarrubi.as>
Co-authored-by: Hans Hasselberg <me@hans.io>
Sometimes in CI the test process could receive a SIGURG, producing this failure:
```
FAIL: TestForwardSignals/signal-interrupt (0.06s)
    util_test.go:286: expected to read line "signal: interrupt" but got "signal: urgent I/O condition"
```
Only forward the signals we test, to avoid this kind of false positive.
Example of such unstable errors in CI:
https://circleci.com/gh/hashicorp/consul/153571
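A minimal sketch of the idea as a standalone program (the helper here is illustrative, not Consul's actual test code): subscribe only to the signals under test, so runtime-internal signals such as SIGURG, which the Go runtime has used for goroutine preemption since Go 1.14, are never forwarded.
```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// signal.Notify with no signal arguments relays *every* incoming
	// signal, including SIGURG from the Go scheduler. Listing only the
	// signals the test sends avoids the false positive.
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, os.Interrupt, syscall.SIGTERM)

	sig := <-ch
	fmt.Printf("signal: %v\n", sig) // e.g. "signal: interrupt"
}
```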
Exposing checks is supposed to allow a Consul agent bound to a different
IP address (e.g., in a different Kubernetes pod) to access health checks
through the proxy while the underlying service binds to localhost. This
is an important security feature that makes sure no external traffic
reaches the service except through the proxy.
However, as far as I can tell, this is subtly broken in the case where
the Consul agent cannot reach the proxy over localhost.
If a proxy is configured with `{ LocalServiceAddress: "127.0.0.1",
Checks: true }`, as is typical for a sidecar proxy, the Consul checks
are currently rewritten to `127.0.0.1:<random port>`. A Consul agent
that does not share the loopback address cannot reach this address. Just
to make sure I was not misunderstanding, I tried configuring the proxy
with `{ LocalServiceAddress: "<pod ip>", Checks: true }`. In this case,
while the checks are rewritten as expected and the agent can reach the
dynamic port, the proxy can no longer reach its backend because the
traffic is no longer on the loopback interface.
I think rewriting the checks to use `proxy.Address`, the proxy's own
address, is more correct in this case. That is the IP where the proxy
can be reached, both by other proxies and by a Consul agent running on
a different IP. The local service address should continue to use
`127.0.0.1` in most cases.
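A minimal sketch of the intended rewrite, with illustrative names (not Consul's actual internals), assuming the proxy registration carries an `Address` field:
```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// exposedCheckAddress advertises an exposed check on the proxy's own
// address, where both other proxies and an agent on a different IP can
// reach it, instead of on 127.0.0.1.
func exposedCheckAddress(proxyAddress string, exposedPort int) string {
	return net.JoinHostPort(proxyAddress, strconv.Itoa(exposedPort))
}

func main() {
	// The service itself keeps binding to 127.0.0.1; only the check
	// address advertised to the agent uses the proxy's address.
	fmt.Println(exposedCheckAddress("10.0.1.5", 21500)) // 10.0.1.5:21500
}
```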
If a proxied service is a gRPC or HTTP2 service, but a path is exposed
using the HTTP1 or TCP protocol, Envoy should not be configured with
`http2ProtocolOptions` for the cluster backing the path.
A situation where this comes up is a gRPC service whose health check or
metrics route (e.g., for Prometheus) is an HTTP1 service running on
a different port. Previously, if these were exposed either using
`Expose: { Checks: true }` or `Expose: { Paths: ... }`, Envoy would
still be configured to communicate with the path over HTTP2, which would
not work properly.
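A sketch of the decision this change makes, with assumed names (the real code builds Envoy cluster protos): the protocol of the exposed path, not of the parent service, determines whether the path's cluster gets `http2ProtocolOptions`.
```go
package main

import "fmt"

// pathUsesHTTP2 reports whether the cluster backing an exposed path
// should be configured with http2ProtocolOptions. Only the path's own
// protocol matters here.
func pathUsesHTTP2(pathProtocol string) bool {
	return pathProtocol == "http2" || pathProtocol == "grpc"
}

func main() {
	// A gRPC service exposing an HTTP1 metrics path: the path's
	// cluster must speak HTTP1 even though the service speaks HTTP2.
	fmt.Println(pathUsesHTTP2("grpc")) // true
	fmt.Println(pathUsesHTTP2("http")) // false: no http2ProtocolOptions
}
```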
I spent some time today on my local Mac to figure out why Consul 1.6.3+
was not accepting limits.http_max_conns_per_client.
This adds an explicit check on the number of available file descriptors
to be sure the configured limit can work (it is no guarantee, since many
clients reaching the agent might consume even more file descriptors).
Anyway, many users are fighting with RLIMIT_NOFILE, and a clear message
would let them figure out what to fix.
Example of message (reload or start):
```
2020-03-11T16:38:37.062+0100 [ERROR] agent: Error starting agent: error="system allows a max of 512 file descriptors, but limits.http_max_conns_per_client: 8192 needs at least 8212"
```
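A minimal sketch of such a check for a Unix target (the helper name and the 20-descriptor headroom are assumptions, inferred from the 8212 - 8192 gap in the message above):
```go
package main

import (
	"fmt"
	"syscall"
)

// checkLimits compares RLIMIT_NOFILE against the configured connection
// limit plus headroom for descriptors the agent needs for itself.
func checkLimits(maxConnsPerClient int) error {
	var limit syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &limit); err != nil {
		return err
	}
	const reservedFDs = 20 // assumed headroom for listeners, logs, etc.
	needed := uint64(maxConnsPerClient) + reservedFDs
	if uint64(limit.Cur) < needed {
		return fmt.Errorf("system allows a max of %d file descriptors, but limits.http_max_conns_per_client: %d needs at least %d",
			limit.Cur, maxConnsPerClient, needed)
	}
	return nil
}

func main() {
	if err := checkLimits(8192); err != nil {
		fmt.Println("[ERROR] agent: Error starting agent: error=" + err.Error())
	}
}
```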
The previous PR which added these was accidentally performing the download
in the root directory. For the api and sdk directories it should be done
in the same directory that will be used to run tests. Otherwise the
wrong dependencies will be downloaded, which may add unnecessary time to
the CI run.
This removes a race condition in reset(): pendingPorts can be set to nil
in reset(), and if the ticker fires at the wrong time, it would crash the
unit test.
reset() now avoids this race condition by cancelling the goroutine via the
killTicker chan.
We also properly clean up everything, so the garbage collector can work as
expected.
To reproduce existing bug:
`while go test -timeout 30s github.com/hashicorp/consul/sdk/freeport -run '^(Test.*)$'; do go clean -testcache; done`
Will crash after a few 10s runs on my machine.
This error could sometimes be seen in unit tests:
```
[INFO] freeport: resetting the freeport package state
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1125536]

goroutine 25 [running]:
container/list.(*List).Len(...)
        /usr/local/Cellar/go/1.14/libexec/src/container/list/list.go:66
github.com/hashicorp/consul/sdk/freeport.checkFreedPortsOnce()
        /Users/p.souchay/go/src/github.com/hashicorp/consul/sdk/freeport/freeport.go:157 +0x86
github.com/hashicorp/consul/sdk/freeport.checkFreedPorts()
        /Users/p.souchay/go/src/github.com/hashicorp/consul/sdk/freeport/freeport.go:147 +0x71
created by github.com/hashicorp/consul/sdk/freeport.initialize
        /Users/p.souchay/go/src/github.com/hashicorp/consul/sdk/freeport/freeport.go:113 +0x2cf
FAIL    github.com/hashicorp/consul/sdk/freeport        1.607s
```
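A minimal sketch of the fix, reusing the identifiers from the trace above but with assumed bodies (this is not the actual freeport code):
```go
package freeport

import (
	"container/list"
	"sync"
	"time"
)

var (
	mu           sync.Mutex
	pendingPorts *list.List
	killTicker   chan struct{}
)

// initialize starts the background checker and hands it the current
// killTicker channel so reset can cancel exactly that goroutine.
func initialize() {
	mu.Lock()
	defer mu.Unlock()
	pendingPorts = list.New()
	killTicker = make(chan struct{})
	go checkFreedPorts(killTicker)
}

// reset cancels the checker goroutine before clearing shared state, so
// the goroutine can never observe pendingPorts == nil.
func reset() {
	mu.Lock()
	defer mu.Unlock()
	if killTicker != nil {
		close(killTicker)
		killTicker = nil
	}
	pendingPorts = nil
}

func checkFreedPorts(stop chan struct{}) {
	ticker := time.NewTicker(250 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			mu.Lock()
			if pendingPorts != nil {
				_ = pendingPorts.Len() // previously panicked here when pendingPorts was nil
			}
			mu.Unlock()
		}
	}
}
```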
Due to merge #7562, upstream does not compile anymore.
Error is:
```
ERRO Running error: gofmt: analysis skipped: errors in package: [/Users/p.souchay/go/src/github.com/hashicorp/consul/agent/config_endpoint_test.go:188:33: too many arguments]
```
We need to detect whether an object is an ember-data snapshot or just a
plain object, and we were restricted from using `instanceof` due to
ember-data's `Snapshot` class being private.
We'd chosen to go with `constructor.name` instead, which seemed to work,
but when the JavaScript gets minified the name of the constructor is also
minified and therefore is not what you expect.
This commit reverts to our original idea of checking for the existence
and type of the `.attributes` function, which is the function we require
within the conditional, and is therefore more reliable (if the function
doesn't exist it will error out during development as well as in production).