* remove the flush after each write to the HTTP response in the agent monitor endpoint
* fix a race condition when we stop and start the monitor multiple times: the doneCh is closed and never recovers.
* start the log-reading goroutine before adding the sink, to avoid filling the log channel before anything gets a chance to read from it
* flush every 500ms to optimize log writing on the HTTP server side.
* add changelog file
* add issue url to changelog
* fix changelog url
* Update changelog
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* use a ticker to flush, and avoid the race condition caused by flushing from a different goroutine (sketched below)
* stop the ticker when done
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
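A minimal sketch of the resulting write loop, assuming a streaming handler shaped like this (names are hypothetical, not Consul's actual code):

import (
    "net/http"
    "time"
)

func streamLogs(w http.ResponseWriter, logCh <-chan []byte, doneCh <-chan struct{}) {
    flusher, ok := w.(http.Flusher)
    if !ok {
        return
    }
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop() // stop the ticker when done

    for {
        select {
        case b := <-logCh:
            w.Write(b) // no flush on every write
        case <-ticker.C:
            flusher.Flush() // flush at most every 500ms
        case <-doneCh:
            flusher.Flush()
            return
        }
    }
}

Because the ticker case runs in the same select loop as the writes, there is no cross-goroutine flush left to race with.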
* Revert "fix race condition when we stop and start monitor multiple times, the doneCh is closed and never recover."
This reverts commit 1eeddf7a
* wait for the log consumer loop to start before registering the sink (sketched below)
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
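A sketch of that startup handshake (channel, handler, and sink-registration names are hypothetical):

// The consumer signals readiness before the sink is registered, so the
// buffered log channel cannot fill up before anyone is draining it.
started := make(chan struct{})
go func() {
    close(started) // the consumer loop is starting
    for b := range logCh {
        handle(b) // hypothetical per-record handler
    }
}()
<-started // wait for the log consumer loop to start
logger.RegisterSink(sink) // only now do records start flowing into logCh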
Updates to a cluster will clear the associated endpoints, and updates to
a listener will clear the associated routes. Update the incremental xDS
logic to account for this implicit cleanup so that we can finish warming
the clusters and listeners.
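A minimal sketch of that accounting (names are hypothetical, not Consul's actual incremental xDS internals):

// When Envoy applies a CDS update it implicitly drops the cluster's
// endpoints, and an LDS update drops the listener's routes. Mark the
// child resources for re-delivery so warming can complete.
func markImplicitCleanup(updatedClusters, updatedListeners []string, resendEDS, resendRDS map[string]bool) {
    for _, c := range updatedClusters {
        resendEDS[c] = true // re-send endpoints for the updated cluster
    }
    for _, l := range updatedListeners {
        resendRDS[l] = true // re-send routes for the updated listener
    }
}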
Fixes #10379
CatalogDestinationsOnly is a passthrough that would enable dialing
addresses outside of Consul's catalog. However, when this flag is set to
true, only _connect_ endpoints for services can be dialed.
This flag is being renamed to signal that non-Connect endpoints can't be
dialed by transparent proxies when the value is set to true.
Previously we would return an error if duplicate paths were specified.
This could lead to problems in cases where a user has the same path,
say /healthz, on two different ports.
This validation was added to signal a potential misconfiguration.
Instead we will only check for duplicate listener ports, since that is
what would lead to ambiguity issues when generating xDS config (see the
sketch below).
In the future we could look into using a single listener and creating
distinct filter chains for each path/port.
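A sketch of the relaxed check, assuming a simplified ExposePath shape (illustrative, not the exact Consul types):

import "fmt"

type ExposePath struct {
    Path         string
    ListenerPort int
}

// Duplicate paths are fine; only a listener port used twice is
// ambiguous when generating xDS config.
func validateExposePaths(paths []ExposePath) error {
    seen := make(map[int]string)
    for _, p := range paths {
        if prev, ok := seen[p.ListenerPort]; ok {
            return fmt.Errorf("listener port %d is used by both %q and %q", p.ListenerPort, prev, p.Path)
        }
        seen[p.ListenerPort] = p.Path
    }
    return nil
}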
Previously if you were to follow these docs and register two external
services, you would set the Address field on the node. The second
registered service would change the address of the node for the first
service.
Now the docs explain the address key and how to register more than one
external service.
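For illustration, registering two external services with per-service addresses via the Go API client might look like this (a hedged sketch; the node name, service names, and addresses are made up):

import "github.com/hashicorp/consul/api"

func registerExternalServices(client *api.Client) error {
    for _, svc := range []*api.AgentService{
        {Service: "search", Address: "search.example.com", Port: 443},
        {Service: "billing", Address: "billing.example.com", Port: 443},
    } {
        // Each service carries its own Address; the node's Address is set
        // once and is not clobbered by the second registration.
        _, err := client.Catalog().Register(&api.CatalogRegistration{
            Node:    "external-node",
            Address: "10.0.0.10",
            Service: svc,
        }, nil)
        if err != nil {
            return err
        }
    }
    return nil
}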
* updating hero with ECS info
* updates to hero
* Include back the Basic Hero styles
The basic hero is still used on the use case pages
* Revert the tsconfig changes
Nothing in the scope of this PR requires these changes!
* Remove the old Carousel CSS file
This is no longer needed as we're using the @hashicorp/react-hero
which comes with all the styling required for this carousel to work.
* Rename ConsulHero -> HomepageHero imports/exports
This will help prevent any confusion for future devs here -- this naming
convention lets us find the source of a component without having to
trace every import.
* Pin the deps
These were previously pinned to the exact version; including ^ will
allow minor & patch updates to sneak in, which normally shouldn't cause
an issue, but we tend to be more conservative with dep upgrades.
* Revert unneeded changes to the document file
* Revert changes to app.js file
Not needed in the scope of this PR!
* Hard pin react-alert
* Remove unneeded css
Co-authored-by: Brandon Romano <brandon@hashicorp.com>
* debug: remove the CLI check for debug_enabled
The API allows collecting profiles even when debug_enabled=false, as long as
ACLs are enabled. Remove this check from the CLI so that users do not
need to set debug_enabled=true for no reason.
Also:
- fix the API client to return errors on non-200 status codes for debug
endpoints (sketched below)
- improve the failure messages when pprof data cannot be collected
Co-Authored-By: Dhia Ayachi <dhia@hashicorp.com>
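A sketch of the client-side fix for the debug endpoints (the helper name is hypothetical; the point is simply surfacing non-200 responses as errors instead of returning the body):

import (
    "fmt"
    "io"
    "net/http"
)

func readDebugResponse(resp *http.Response) ([]byte, error) {
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        body, _ := io.ReadAll(resp.Body)
        return nil, fmt.Errorf("unexpected response code %d: %s", resp.StatusCode, body)
    }
    return io.ReadAll(resp.Body)
}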
* remove parallel test runs
parallel runs create a race condition that fails the debug tests
* snapshot the timestamp at the beginning of the capture
- the timestamp used to create the capture subfolder is snapshotted once at the beginning of the capture and reused for subsequent captures
- captures append to the file if it already exists
* Revert "snapshot the timestamp at the beginning of the capture"
This reverts commit c2d03346
* Refactor captureDynamic to extract capture logic for each item in a different func
* extract the wait group Add outside the goroutine to avoid a race condition
* capture pprof in a separate goroutine
* perform a single capture of pprof data for the whole duration (both sketched below)
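A combined sketch of the last two items, assuming the api client's Debug().Profile helper (which blocks server-side for the requested seconds); outputDir and the surrounding plumbing are hypothetical:

import (
    "os"
    "path/filepath"
    "sync"
    "time"

    "github.com/hashicorp/consul/api"
)

func captureProfile(client *api.Client, duration time.Duration, outputDir string) error {
    var wg sync.WaitGroup
    var captureErr error
    wg.Add(1) // Add happens outside the goroutine, before Wait can be reached
    go func() {
        defer wg.Done()
        // One pprof capture spanning the whole duration, not one per interval.
        data, err := client.Debug().Profile(int(duration.Seconds()))
        if err != nil {
            captureErr = err
            return
        }
        captureErr = os.WriteFile(filepath.Join(outputDir, "profile.prof"), data, 0644)
    }()
    wg.Wait() // if Add were inside the goroutine, Wait could return before it ran
    return captureErr
}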
* add missing vendor dependency
* add a change log and fix documentation to reflect the change
* create function for timestamp dir creation and simplify error handling
* use error groups and ticker to simplify interval capture loop
* Logs, profiles and traces are captured for the full duration; metrics, heap and goroutine dumps are captured at every interval (sketched below)
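A sketch of the loop shape, assuming golang.org/x/sync/errgroup and hypothetical captureLongLived/captureShortLived helpers:

import (
    "context"
    "time"

    "golang.org/x/sync/errgroup"
)

func capture(ctx context.Context, interval time.Duration) error {
    g, ctx := errgroup.WithContext(ctx)

    // Logs, profile and trace run once, for the full capture duration.
    g.Go(func() error { return captureLongLived(ctx) })

    // Metrics, heap and goroutine dumps repeat on every tick.
    g.Go(func() error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if err := captureShortLived(ctx); err != nil {
                return err
            }
            select {
            case <-ticker.C:
            case <-ctx.Done():
                return nil
            }
        }
    })
    return g.Wait() // the first error cancels ctx and stops both goroutines
}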
* refactor Logs capture routine and add log capture specific test
* improve error reporting when the log test fails
* change test duration to 1s
* make time parsing of log lines more robust
* refactor the log time format into a const
* check for an empty log line as early as possible and return (sketched below)
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
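A sketch of those log-line checks (the layout constant matches hclog's timestamp format, but treat the parsing details as assumptions):

import (
    "strings"
    "time"
)

const logTimeFormat = "2006-01-02T15:04:05.000Z0700"

func validateLogLine(line string) bool {
    if len(line) == 0 {
        return false // bail out on empty lines as early as possible
    }
    // The timestamp is the first space-separated field of an agent log line.
    fields := strings.SplitN(line, " ", 2)
    _, err := time.Parse(logTimeFormat, fields[0])
    return err == nil
}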
* rename function to captureShortLived
* more specific changelog
Co-authored-by: Paul Banks <banks@banksco.de>
* update documentation to reflect current implementation
* add test for behavior when invalid param is passed to the command
* fix argument line in test
* a more detailed description of the new behaviour
Co-authored-by: Paul Banks <banks@banksco.de>
* print success right after the capture is done
* remove an unnecessary error check
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* upgraded github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57 => v0.0.0-20210601050228-01bbb1931b22
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
Co-authored-by: Paul Banks <banks@banksco.de>
* Docs for Unix Domain Sockets
There are a number of cases where a user might wish to either 1)
expose a service through a Unix domain socket in the filesystem
('downstream') or 2) connect to an upstream service through a local Unix
domain socket ('upstream').
As of Consul 1.10-beta2, we've added new syntax and Envoy proxy support
for this.
To connect to a service via a local Unix domain socket instead of a
port, add local_bind_socket_path and optionally local_bind_socket_mode
to the upstream config for a service:
upstreams = [
  {
    destination_name = "service-1"
    local_bind_socket_path = "/tmp/socket_service_1"
    local_bind_socket_mode = "0700"
    ...
  }
  ...
]
This will cause Envoy to create a socket with the path and mode
provided, and connect it to service-1.
The mode field is optional; if omitted, Envoy's default mode is used.
This is not applicable for abstract sockets. See
https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/address.proto#envoy-v3-api-msg-config-core-v3-pipe
for details.
NOTE: These options conflict with the local_bind_port and
local_bind_address options. We can bind to a port or to a socket, but
not both.
To expose a service listening on a Unix domain socket to the service
mesh use either the 'socket_path' field in the service definition or the
'local_service_socket_path' field in the proxy definition. These
fields are analogous to the 'port' and 'service_port' fields in their
respective locations.
services {
  name = "service-2"
  socket_path = "/tmp/socket_service_2"
  ...
}

OR

proxy {
  local_service_socket_path = "/tmp/socket_service_2"
  ...
}
There is no mode field since the service is expected to create the
socket it is listening on, not the Envoy proxy.
Again, the socket_path and local_service_socket_path fields conflict
with address/port and local_service_address/local_service_port
configuration entries.
Set up a simple service mesh with dummy services:
socat -d UNIX-LISTEN:/tmp/downstream.sock,fork UNIX-CONNECT:/tmp/upstream.sock
socat -v tcp-l:4444,fork exec:/bin/cat
services {
  name = "sock_forwarder"
  id = "sock_forwarder.1"
  socket_path = "/tmp/downstream.sock"
  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "echo-service"
            local_bind_socket_path = "/tmp/upstream.sock"
            config {
              passive_health_check {
                interval = "10s"
                max_failures = 42
              }
            }
          }
        ]
      }
    }
  }
}

services {
  name = "echo-service"
  port = 4444
  connect = { sidecar_service {} }
}
Kind = "ingress-gateway"
Name = "ingress-service"
Listeners = [
  {
    Port = 8080
    Protocol = "tcp"
    Services = [
      {
        Name = "sock_forwarder"
      }
    ]
  }
]
consul agent -dev -enable-script-checks -config-dir=./consul.d
consul connect envoy -sidecar-for sock_forwarder.1
consul connect envoy -sidecar-for echo-service -admin-bind localhost:19001
consul config write ingress-gateway.hcl
consul connect envoy -gateway=ingress -register -service ingress-service -address '{{ GetInterfaceIP "eth0" }}:8888' -admin-bind localhost:19002
netcat 127.0.0.1 4444
netcat 127.0.0.1 8080
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
* fixup Unix capitalization
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
* Update website/content/docs/connect/registration/service-registration.mdx
Co-authored-by: Blake Covarrubias <blake@covarrubi.as>
* Provide examples in HCL and JSON
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
* Apply suggestions from code review
Co-authored-by: Blake Covarrubias <blake@covarrubi.as>
* One more fixup for docs
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
Co-authored-by: Blake Covarrubias <blake@covarrubi.as>
This PR adds cluster members to the metrics API. The number of members per
segment is reported, as well as the total number of members.
Tested by running a multi-node cluster locally and ensuring the numbers were
correct. Also added unit test coverage to add the new expected gauges to
existing test cases.
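For illustration, emitting such gauges with armon/go-metrics could look like this (the metric keys and label names here are assumptions, not the exact gauges added by the PR):

import "github.com/armon/go-metrics"

func emitMemberGauges(total int, perSegment map[string]int) {
    // Total member count across the cluster.
    metrics.SetGauge([]string{"consul", "members", "total"}, float32(total))
    // One gauge per network segment, distinguished by a label.
    for segment, n := range perSegment {
        metrics.SetGaugeWithLabels([]string{"consul", "members", "segment"},
            float32(n), []metrics.Label{{Name: "segment", Value: segment}})
    }
}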