consul/agent/ui_endpoint_test.go

package agent

import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/http/httptest"
"net/url"
"path/filepath"
"sync/atomic"
"testing"
"github.com/hashicorp/consul/agent/config"
"github.com/hashicorp/consul/agent/structs"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/sdk/testutil"
"github.com/hashicorp/consul/sdk/testutil/retry"
"github.com/hashicorp/consul/testrpc"
cleanhttp "github.com/hashicorp/go-cleanhttp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
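
// TestUiIndex verifies that a file placed in the configured ui_config.dir is
// served by the agent's HTTP server under the /ui/ prefix.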
func TestUiIndex(t *testing.T) {
t.Parallel()
// Make a test dir to serve UI files
uiDir := testutil.TempDir(t, "consul")
// Make the server
a := NewTestAgent(t, `
ui_config {
dir = "`+uiDir+`"
}
`)
defer a.Shutdown()
testrpc.WaitForLeader(t, a.RPC, "dc1")
// Create a file in the custom UI dir
path := filepath.Join(a.Config.UIConfig.Dir, "my-file")
if err := ioutil.WriteFile(path, []byte("test"), 0644); err != nil {
t.Fatalf("err: %v", err)
}
// Request the custom file
req, _ := http.NewRequest("GET", "/ui/my-file", nil)
req.URL.Scheme = "http"
req.URL.Host = a.HTTPAddr()
// Make the request
client := cleanhttp.DefaultClient()
resp, err := client.Do(req)
if err != nil {
t.Fatalf("err: %v", err)
}
defer resp.Body.Close()
// Verify the response
if resp.StatusCode != 200 {
t.Fatalf("bad: %v", resp)
}
// Verify the body
out := bytes.NewBuffer(nil)
io.Copy(out, resp.Body)
if out.String() != "test" {
t.Fatalf("bad: %s", out.Bytes())
}
}
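
// TestUiNodes registers an extra node via Catalog.Register and expects the
// internal UI nodes endpoint to dump both the agent's own node and the new
// one, with non-nil (possibly empty) service and check lists.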
func TestUiNodes(t *testing.T) {
t.Parallel()
a := NewTestAgent(t, "")
defer a.Shutdown()
testrpc.WaitForTestAgent(t, a.RPC, "dc1")
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "test",
Address: "127.0.0.1",
}
var out struct{}
if err := a.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
req, _ := http.NewRequest("GET", "/v1/internal/ui/nodes/dc1", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UINodes(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
assertIndex(t, resp)
// Should be 2 nodes, and all the empty lists should be non-nil
nodes := obj.(structs.NodeDump)
if len(nodes) != 2 ||
nodes[0].Node != a.Config.NodeName ||
nodes[0].Services == nil || len(nodes[0].Services) != 1 ||
nodes[0].Checks == nil || len(nodes[0].Checks) != 1 ||
nodes[1].Node != "test" ||
nodes[1].Services == nil || len(nodes[1].Services) != 0 ||
nodes[1].Checks == nil || len(nodes[1].Checks) != 0 {
t.Fatalf("bad: %v", obj)
}
}
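
// TestUiNodes_Filter registers two nodes with different node metadata and
// verifies that the ?filter= query parameter restricts the dump to the
// matching node.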
func TestUiNodes_Filter(t *testing.T) {
t.Parallel()
a := NewTestAgent(t, "")
defer a.Shutdown()
testrpc.WaitForTestAgent(t, a.RPC, "dc1")
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "test",
Address: "127.0.0.1",
NodeMeta: map[string]string{
"os": "linux",
},
}
var out struct{}
require.NoError(t, a.RPC("Catalog.Register", args, &out))
args = &structs.RegisterRequest{
Datacenter: "dc1",
Node: "test2",
Address: "127.0.0.1",
NodeMeta: map[string]string{
"os": "macos",
},
}
require.NoError(t, a.RPC("Catalog.Register", args, &out))
req, _ := http.NewRequest("GET", "/v1/internal/ui/nodes/dc1?filter="+url.QueryEscape("Meta.os == linux"), nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UINodes(resp, req)
require.NoError(t, err)
assertIndex(t, resp)
// Only the node matching the filter (Meta.os == linux) should be returned, with non-nil empty lists
nodes := obj.(structs.NodeDump)
require.Len(t, nodes, 1)
require.Equal(t, nodes[0].Node, "test")
require.Empty(t, nodes[0].Services)
require.Empty(t, nodes[0].Checks)
}
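
// TestUiNodeInfo exercises the single-node UI endpoint for the agent's own
// node and for a freshly registered node that has no services or checks.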
func TestUiNodeInfo(t *testing.T) {
t.Parallel()
a := NewTestAgent(t, "")
defer a.Shutdown()
testrpc.WaitForLeader(t, a.RPC, "dc1")
req, _ := http.NewRequest("GET", fmt.Sprintf("/v1/internal/ui/node/%s", a.Config.NodeName), nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UINodeInfo(resp, req)
require.NoError(t, err)
require.Equal(t, resp.Code, http.StatusOK)
assertIndex(t, resp)
// Should be 1 node for the server
node := obj.(*structs.NodeInfo)
if node.Node != a.Config.NodeName {
t.Fatalf("bad: %v", node)
}
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "test",
Address: "127.0.0.1",
}
var out struct{}
if err := a.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
req, _ = http.NewRequest("GET", "/v1/internal/ui/node/test", nil)
resp = httptest.NewRecorder()
obj, err = a.srv.UINodeInfo(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
assertIndex(t, resp)
// Should be non-nil empty lists for services and checks
node = obj.(*structs.NodeInfo)
if node.Node != "test" ||
node.Services == nil || len(node.Services) != 0 ||
node.Checks == nil || len(node.Checks) != 0 {
t.Fatalf("bad: %v", node)
}
}
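
// TestUiServices seeds the catalog with typical services, a connect proxy,
// and a terminating gateway, then verifies the aggregated service listing
// both unfiltered and with a filter expression.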
func TestUiServices(t *testing.T) {
t.Parallel()
a := NewTestAgent(t, "")
defer a.Shutdown()
testrpc.WaitForTestAgent(t, a.RPC, "dc1")
requests := []*structs.RegisterRequest{
// register foo node
{
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.1",
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "foo",
Name: "node check",
Status: api.HealthPassing,
},
},
},
// register api service on node foo
{
Datacenter: "dc1",
Node: "foo",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindTypical,
Service: "api",
ID: "api-1",
Tags: []string{"tag1", "tag2"},
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "foo",
Name: "api svc check",
ServiceName: "api",
ServiceID: "api-1",
Status: api.HealthWarning,
},
},
},
// register api-proxy svc on node foo
{
Datacenter: "dc1",
Node: "foo",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindConnectProxy,
Service: "api-proxy",
ID: "api-proxy-1",
Tags: []string{},
Meta: map[string]string{structs.MetaExternalSource: "k8s"},
Port: 1234,
Proxy: structs.ConnectProxyConfig{
DestinationServiceName: "api",
},
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "foo",
Name: "api proxy listening",
ServiceName: "api-proxy",
ServiceID: "api-proxy-1",
Status: api.HealthPassing,
},
},
},
// register bar node with service web
{
Datacenter: "dc1",
Node: "bar",
Address: "127.0.0.2",
Service: &structs.NodeService{
Kind: structs.ServiceKindTypical,
Service: "web",
ID: "web-1",
Tags: []string{},
Meta: map[string]string{structs.MetaExternalSource: "k8s"},
Port: 1234,
},
Checks: []*structs.HealthCheck{
{
Node: "bar",
Name: "web svc check",
Status: api.HealthCritical,
ServiceName: "web",
ServiceID: "web-1",
},
},
},
// register zip node with service cache
{
Datacenter: "dc1",
Node: "zip",
Address: "127.0.0.3",
Service: &structs.NodeService{
Service: "cache",
Tags: []string{},
},
},
}
for _, args := range requests {
var out struct{}
require.NoError(t, a.RPC("Catalog.Register", args, &out))
}
// Register a terminating gateway associated with api and cache
{
arg := structs.RegisterRequest{
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.1",
Service: &structs.NodeService{
ID: "terminating-gateway",
Service: "terminating-gateway",
Kind: structs.ServiceKindTerminatingGateway,
Port: 443,
},
}
var regOutput struct{}
require.NoError(t, a.RPC("Catalog.Register", &arg, &regOutput))
args := &structs.TerminatingGatewayConfigEntry{
Name: "terminating-gateway",
Kind: structs.TerminatingGateway,
Services: []structs.LinkedService{
{
Name: "api",
},
{
Name: "cache",
},
},
}
req := structs.ConfigEntryRequest{
Op: structs.ConfigEntryUpsert,
Datacenter: "dc1",
Entry: args,
}
var configOutput bool
require.NoError(t, a.RPC("ConfigEntry.Apply", &req, &configOutput))
require.True(t, configOutput)
// web should not show up as ConnectedWithGateway since other-terminating-gateway has no registered instances
args = &structs.TerminatingGatewayConfigEntry{
Name: "other-terminating-gateway",
Kind: structs.TerminatingGateway,
Services: []structs.LinkedService{
{
Name: "web",
},
},
}
req = structs.ConfigEntryRequest{
Op: structs.ConfigEntryUpsert,
Datacenter: "dc1",
Entry: args,
}
require.NoError(t, a.RPC("ConfigEntry.Apply", &req, &configOutput))
require.True(t, configOutput)
}
t.Run("No Filter", func(t *testing.T) {
t.Parallel()
req, _ := http.NewRequest("GET", "/v1/internal/ui/services/dc1", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIServices(resp, req)
require.NoError(t, err)
assertIndex(t, resp)
// Should be 6 service summaries, and all the empty lists should be non-nil
summary := obj.([]*ServiceListingSummary)
require.Len(t, summary, 6)
// internal accounting that users don't see can be blown away
for _, sum := range summary {
sum.externalSourceSet = nil
sum.checks = nil
}
expected := []*ServiceListingSummary{
{
ServiceSummary: ServiceSummary{
Kind: structs.ServiceKindTypical,
Name: "api",
Datacenter: "dc1",
Tags: []string{"tag1", "tag2"},
Nodes: []string{"foo"},
InstanceCount: 1,
ChecksPassing: 2,
ChecksWarning: 1,
ChecksCritical: 0,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
ConnectedWithProxy: true,
ConnectedWithGateway: true,
},
{
ServiceSummary: ServiceSummary{
Kind: structs.ServiceKindConnectProxy,
Name: "api-proxy",
Datacenter: "dc1",
Tags: nil,
Nodes: []string{"foo"},
InstanceCount: 1,
ChecksPassing: 2,
ChecksWarning: 0,
ChecksCritical: 0,
ExternalSources: []string{"k8s"},
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
},
{
ServiceSummary: ServiceSummary{
Kind: structs.ServiceKindTypical,
Name: "cache",
Datacenter: "dc1",
Tags: nil,
Nodes: []string{"zip"},
InstanceCount: 1,
ChecksPassing: 0,
ChecksWarning: 0,
ChecksCritical: 0,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
ConnectedWithGateway: true,
},
{
ServiceSummary: ServiceSummary{
Kind: structs.ServiceKindTypical,
Name: "consul",
Datacenter: "dc1",
Tags: nil,
Nodes: []string{a.Config.NodeName},
InstanceCount: 1,
ChecksPassing: 1,
ChecksWarning: 0,
ChecksCritical: 0,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
},
{
ServiceSummary: ServiceSummary{
Kind: structs.ServiceKindTerminatingGateway,
Name: "terminating-gateway",
Datacenter: "dc1",
Tags: nil,
Nodes: []string{"foo"},
InstanceCount: 1,
ChecksPassing: 1,
ChecksWarning: 0,
ChecksCritical: 0,
GatewayConfig: GatewayConfig{AssociatedServiceCount: 2},
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
},
{
ServiceSummary: ServiceSummary{
Kind: structs.ServiceKindTypical,
Name: "web",
Datacenter: "dc1",
Tags: nil,
Nodes: []string{"bar"},
InstanceCount: 1,
ChecksPassing: 0,
ChecksWarning: 0,
ChecksCritical: 1,
ExternalSources: []string{"k8s"},
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
},
}
require.ElementsMatch(t, expected, summary)
})
t.Run("Filtered", func(t *testing.T) {
filterQuery := url.QueryEscape("Service.Service == web or Service.Service == api")
req, _ := http.NewRequest("GET", "/v1/internal/ui/services?filter="+filterQuery, nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIServices(resp, req)
require.NoError(t, err)
assertIndex(t, resp)
// Only the api and web services should match the filter, with non-nil empty lists
summary := obj.([]*ServiceListingSummary)
require.Len(t, summary, 2)
// internal accounting that users don't see can be blown away
for _, sum := range summary {
sum.externalSourceSet = nil
sum.checks = nil
}
expected := []*ServiceListingSummary{
{
ServiceSummary: ServiceSummary{
Kind: structs.ServiceKindTypical,
Name: "api",
Datacenter: "dc1",
Tags: []string{"tag1", "tag2"},
Nodes: []string{"foo"},
InstanceCount: 1,
ChecksPassing: 1,
ChecksWarning: 1,
ChecksCritical: 0,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
ConnectedWithProxy: false,
ConnectedWithGateway: false,
},
{
ServiceSummary: ServiceSummary{
Kind: structs.ServiceKindTypical,
Name: "web",
Datacenter: "dc1",
Tags: nil,
Nodes: []string{"bar"},
InstanceCount: 1,
ChecksPassing: 0,
ChecksWarning: 0,
ChecksCritical: 1,
ExternalSources: []string{"k8s"},
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
},
}
require.ElementsMatch(t, expected, summary)
})
}
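
// TestUIGatewayServiceNodes_Terminating verifies that the
// gateway-services-nodes endpoint reports the services linked to a
// terminating gateway, including a linked service (redis) that has no
// registered instances.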
func TestUIGatewayServiceNodes_Terminating(t *testing.T) {
t.Parallel()
a := NewTestAgent(t, "")
defer a.Shutdown()
// Register terminating gateway and a service that will be associated with it
{
arg := structs.RegisterRequest{
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.1",
Service: &structs.NodeService{
ID: "terminating-gateway",
Service: "terminating-gateway",
Kind: structs.ServiceKindTerminatingGateway,
Port: 443,
},
Check: &structs.HealthCheck{
Name: "terminating connect",
Status: api.HealthPassing,
ServiceID: "terminating-gateway",
},
}
var regOutput struct{}
require.NoError(t, a.RPC("Catalog.Register", &arg, &regOutput))
arg = structs.RegisterRequest{
Datacenter: "dc1",
Node: "bar",
Address: "127.0.0.2",
Service: &structs.NodeService{
ID: "db",
Service: "db",
Tags: []string{"primary"},
},
Check: &structs.HealthCheck{
Name: "db-warning",
Status: api.HealthWarning,
ServiceID: "db",
},
}
require.NoError(t, a.RPC("Catalog.Register", &arg, &regOutput))
arg = structs.RegisterRequest{
Datacenter: "dc1",
Node: "baz",
Address: "127.0.0.3",
Service: &structs.NodeService{
ID: "db2",
Service: "db",
Tags: []string{"backup"},
},
Check: &structs.HealthCheck{
Name: "db2-passing",
Status: api.HealthPassing,
ServiceID: "db2",
},
}
require.NoError(t, a.RPC("Catalog.Register", &arg, &regOutput))
// Register terminating-gateway config entry, linking it to db and redis (redis is not registered)
args := &structs.TerminatingGatewayConfigEntry{
Name: "terminating-gateway",
Kind: structs.TerminatingGateway,
Services: []structs.LinkedService{
{
Name: "db",
},
{
Name: "redis",
CAFile: "/etc/certs/ca.pem",
CertFile: "/etc/certs/cert.pem",
KeyFile: "/etc/certs/key.pem",
},
},
}
req := structs.ConfigEntryRequest{
Op: structs.ConfigEntryUpsert,
Datacenter: "dc1",
Entry: args,
}
var configOutput bool
require.NoError(t, a.RPC("ConfigEntry.Apply", &req, &configOutput))
require.True(t, configOutput)
}
// Request the services linked to the terminating gateway
req, _ := http.NewRequest("GET", "/v1/internal/ui/gateway-services-nodes/terminating-gateway", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIGatewayServicesNodes(resp, req)
require.Nil(t, err)
assertIndex(t, resp)
summary := obj.([]*ServiceSummary)
// internal accounting that users don't see can be blown away
for _, sum := range summary {
sum.externalSourceSet = nil
sum.checks = nil
}
expect := []*ServiceSummary{
{
Name: "redis",
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
{
Name: "db",
Datacenter: "dc1",
Tags: []string{"backup", "primary"},
Nodes: []string{"bar", "baz"},
InstanceCount: 2,
ChecksPassing: 1,
ChecksWarning: 1,
ChecksCritical: 0,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
}
require.ElementsMatch(t, expect, summary)
}
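
// TestUIGatewayServiceNodes_Ingress verifies that the gateway-services-nodes
// endpoint reports the services exposed by an ingress gateway along with the
// addresses derived from each listener, including the configured alt_domain.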
func TestUIGatewayServiceNodes_Ingress(t *testing.T) {
t.Parallel()
a := NewTestAgent(t, `alt_domain = "alt.consul."`)
defer a.Shutdown()
// Register ingress gateway and a service that will be associated with it
{
arg := structs.RegisterRequest{
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.1",
Service: &structs.NodeService{
ID: "ingress-gateway",
Service: "ingress-gateway",
Kind: structs.ServiceKindIngressGateway,
Port: 8443,
},
Check: &structs.HealthCheck{
Name: "ingress connect",
Status: api.HealthPassing,
ServiceID: "ingress-gateway",
},
}
var regOutput struct{}
require.NoError(t, a.RPC("Catalog.Register", &arg, &regOutput))
arg = structs.RegisterRequest{
Datacenter: "dc1",
Node: "bar",
Address: "127.0.0.2",
Service: &structs.NodeService{
ID: "db",
Service: "db",
Tags: []string{"primary"},
},
Check: &structs.HealthCheck{
Name: "db-warning",
Status: api.HealthWarning,
ServiceID: "db",
},
}
require.NoError(t, a.RPC("Catalog.Register", &arg, &regOutput))
arg = structs.RegisterRequest{
Datacenter: "dc1",
Node: "baz",
Address: "127.0.0.3",
Service: &structs.NodeService{
ID: "db2",
Service: "db",
Tags: []string{"backup"},
},
Check: &structs.HealthCheck{
Name: "db2-passing",
Status: api.HealthPassing,
ServiceID: "db2",
},
}
require.NoError(t, a.RPC("Catalog.Register", &arg, &regOutput))
// Set web protocol to http
svcDefaultsReq := structs.ConfigEntryRequest{
Datacenter: "dc1",
Entry: &structs.ServiceConfigEntry{
Name: "web",
Protocol: "http",
},
}
var configOutput bool
require.NoError(t, a.RPC("ConfigEntry.Apply", &svcDefaultsReq, &configOutput))
require.True(t, configOutput)
// Register ingress-gateway config entry, linking it to db and web
args := &structs.IngressGatewayConfigEntry{
Name: "ingress-gateway",
Kind: structs.IngressGateway,
Listeners: []structs.IngressListener{
{
Port: 8888,
Protocol: "tcp",
Services: []structs.IngressService{
{
Name: "db",
},
},
},
{
Port: 8080,
Protocol: "http",
Services: []structs.IngressService{
{
Name: "web",
},
},
},
{
Port: 8081,
Protocol: "http",
Services: []structs.IngressService{
{
Name: "web",
Hosts: []string{"*.test.example.com"},
},
},
},
},
}
req := structs.ConfigEntryRequest{
Op: structs.ConfigEntryUpsert,
Datacenter: "dc1",
Entry: args,
}
require.NoError(t, a.RPC("ConfigEntry.Apply", &req, &configOutput))
require.True(t, configOutput)
}
// Request the services exposed by the ingress gateway
req, _ := http.NewRequest("GET", "/v1/internal/ui/gateway-services-nodes/ingress-gateway", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIGatewayServicesNodes(resp, req)
require.Nil(t, err)
assertIndex(t, resp)
// Construct expected addresses so that differences between OSS/Ent are handled by code
webDNS := serviceIngressDNSName("web", "dc1", "consul.", structs.DefaultEnterpriseMeta())
webDNSAlt := serviceIngressDNSName("web", "dc1", "alt.consul.", structs.DefaultEnterpriseMeta())
dbDNS := serviceIngressDNSName("db", "dc1", "consul.", structs.DefaultEnterpriseMeta())
dbDNSAlt := serviceIngressDNSName("db", "dc1", "alt.consul.", structs.DefaultEnterpriseMeta())
dump := obj.([]*ServiceSummary)
expect := []*ServiceSummary{
{
Name: "web",
GatewayConfig: GatewayConfig{
Addresses: []string{
fmt.Sprintf("%s:8080", webDNS),
fmt.Sprintf("%s:8080", webDNSAlt),
"*.test.example.com:8081",
},
},
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
{
Name: "db",
Datacenter: "dc1",
Tags: []string{"backup", "primary"},
Nodes: []string{"bar", "baz"},
InstanceCount: 2,
ChecksPassing: 1,
ChecksWarning: 1,
ChecksCritical: 0,
GatewayConfig: GatewayConfig{
Addresses: []string{
fmt.Sprintf("%s:8888", dbDNS),
fmt.Sprintf("%s:8888", dbDNSAlt),
},
},
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
}
// internal accounting that users don't see can be blown away
for _, sum := range dump {
sum.GatewayConfig.addressesSet = nil
sum.checks = nil
}
require.ElementsMatch(t, expect, dump)
}
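
// TestUIGatewayIntentions verifies that only intentions whose destination is
// one of the terminating gateway's linked services (or a wildcard) are
// returned by the gateway-intentions endpoint.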
func TestUIGatewayIntentions(t *testing.T) {
t.Parallel()
a := NewTestAgent(t, "")
defer a.Shutdown()
testrpc.WaitForServiceIntentions(t, a.RPC, "dc1")
// Register terminating gateway and config entry linking it to postgres + redis
{
arg := structs.RegisterRequest{
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.1",
Service: &structs.NodeService{
ID: "terminating-gateway",
Service: "terminating-gateway",
Kind: structs.ServiceKindTerminatingGateway,
Port: 443,
},
Check: &structs.HealthCheck{
Name: "terminating connect",
Status: api.HealthPassing,
ServiceID: "terminating-gateway",
},
}
var regOutput struct{}
require.NoError(t, a.RPC("Catalog.Register", &arg, &regOutput))
args := &structs.TerminatingGatewayConfigEntry{
Name: "terminating-gateway",
Kind: structs.TerminatingGateway,
Services: []structs.LinkedService{
{
Name: "postgres",
},
{
Name: "redis",
CAFile: "/etc/certs/ca.pem",
CertFile: "/etc/certs/cert.pem",
KeyFile: "/etc/certs/key.pem",
},
},
}
req := structs.ConfigEntryRequest{
Op: structs.ConfigEntryUpsert,
Datacenter: "dc1",
Entry: args,
}
var configOutput bool
require.NoError(t, a.RPC("ConfigEntry.Apply", &req, &configOutput))
require.True(t, configOutput)
}
// create some symmetric intentions to ensure we are only matching on destination
{
for _, v := range []string{"*", "mysql", "redis", "postgres"} {
req := structs.IntentionRequest{
Datacenter: "dc1",
Op: structs.IntentionOpCreate,
Intention: structs.TestIntention(t),
}
req.Intention.SourceName = "api"
req.Intention.DestinationName = v
var reply string
require.NoError(t, a.RPC("Intention.Apply", &req, &reply))
req = structs.IntentionRequest{
Datacenter: "dc1",
Op: structs.IntentionOpCreate,
Intention: structs.TestIntention(t),
}
req.Intention.SourceName = v
req.Intention.DestinationName = "api"
require.NoError(t, a.RPC("Intention.Apply", &req, &reply))
}
}
// Request intentions matching the gateway named "terminating-gateway"
req, _ := http.NewRequest("GET", "/v1/internal/ui/gateway-intentions/terminating-gateway", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIGatewayIntentions(resp, req)
require.Nil(t, err)
assertIndex(t, resp)
intentions := obj.(structs.Intentions)
require.Len(t, intentions, 3)
// Only intentions with linked services as a destination should be returned, and wildcard matches should be deduped
expected := []string{"postgres", "*", "redis"}
actual := []string{
intentions[0].DestinationName,
intentions[1].DestinationName,
intentions[2].DestinationName,
}
require.ElementsMatch(t, expected, actual)
}
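
// TestUIEndpoint_modifySummaryForGatewayService_UseRequestedDCInsteadOfConfigured
// ensures the generated gateway address uses the datacenter from the request
// rather than the agent's configured datacenter.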
func TestUIEndpoint_modifySummaryForGatewayService_UseRequestedDCInsteadOfConfigured(t *testing.T) {
dc := "dc2"
cfg := config.RuntimeConfig{Datacenter: "dc1", DNSDomain: "consul"}
sum := ServiceSummary{GatewayConfig: GatewayConfig{}}
gwsvc := structs.GatewayService{Service: structs.ServiceName{Name: "test"}, Port: 42}
modifySummaryForGatewayService(&cfg, dc, &sum, &gwsvc)
expected := serviceCanonicalDNSName("test", "ingress", "dc2", "consul", nil) + ":42"
require.Equal(t, expected, sum.GatewayConfig.Addresses[0])
}
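
// TestUIServiceTopology builds an ingress -> api -> web -> redis chain with
// intentions and config entries, then exercises the service-topology endpoint
// for several starting services.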
func TestUIServiceTopology(t *testing.T) {
t.Parallel()
a := NewTestAgent(t, "")
defer a.Shutdown()
// Register ingress -> api -> web -> redis
{
registrations := map[string]*structs.RegisterRequest{
"Node edge": {
Datacenter: "dc1",
Node: "edge",
Address: "127.0.0.20",
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "edge",
CheckID: "edge:alive",
Name: "edge-liveness",
Status: api.HealthPassing,
},
},
},
"Ingress gateway on edge": {
Datacenter: "dc1",
Node: "edge",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindIngressGateway,
ID: "ingress",
Service: "ingress",
Port: 443,
Address: "198.18.1.20",
},
},
"Node foo": {
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.2",
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "foo",
CheckID: "foo:alive",
Name: "foo-liveness",
Status: api.HealthPassing,
},
},
},
"Service api on foo": {
Datacenter: "dc1",
Node: "foo",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindTypical,
ID: "api",
Service: "api",
Port: 9090,
Address: "198.18.1.2",
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "foo",
CheckID: "foo:api",
Name: "api-liveness",
Status: api.HealthPassing,
ServiceID: "api",
ServiceName: "api",
},
},
},
"Service api-proxy": {
Datacenter: "dc1",
Node: "foo",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindConnectProxy,
ID: "api-proxy",
Service: "api-proxy",
Port: 8443,
Address: "198.18.1.2",
Proxy: structs.ConnectProxyConfig{
DestinationServiceName: "api",
Upstreams: structs.Upstreams{
{
DestinationName: "web",
LocalBindPort: 8080,
},
},
},
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "foo",
CheckID: "foo:api-proxy",
Name: "api proxy listening",
Status: api.HealthPassing,
ServiceID: "api-proxy",
ServiceName: "api-proxy",
},
},
},
"Node bar": {
Datacenter: "dc1",
Node: "bar",
Address: "127.0.0.3",
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "bar",
CheckID: "bar:alive",
Name: "bar-liveness",
Status: api.HealthPassing,
},
},
},
"Service web on bar": {
Datacenter: "dc1",
Node: "bar",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindTypical,
ID: "web",
Service: "web",
Port: 80,
Address: "198.18.1.20",
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "bar",
CheckID: "bar:web",
Name: "web-liveness",
Status: api.HealthWarning,
ServiceID: "web",
ServiceName: "web",
},
},
},
"Service web-proxy on bar": {
Datacenter: "dc1",
Node: "bar",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindConnectProxy,
ID: "web-proxy",
Service: "web-proxy",
Port: 8443,
Address: "198.18.1.20",
Proxy: structs.ConnectProxyConfig{
DestinationServiceName: "web",
Upstreams: structs.Upstreams{
{
DestinationName: "redis",
LocalBindPort: 123,
},
},
},
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "bar",
CheckID: "bar:web-proxy",
Name: "web proxy listening",
Status: api.HealthCritical,
ServiceID: "web-proxy",
ServiceName: "web-proxy",
},
},
},
"Node baz": {
Datacenter: "dc1",
Node: "baz",
Address: "127.0.0.4",
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "baz",
CheckID: "baz:alive",
Name: "baz-liveness",
Status: api.HealthPassing,
},
},
},
"Service web on baz": {
Datacenter: "dc1",
Node: "baz",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindTypical,
ID: "web",
Service: "web",
Port: 80,
Address: "198.18.1.40",
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "baz",
CheckID: "baz:web",
Name: "web-liveness",
Status: api.HealthPassing,
ServiceID: "web",
ServiceName: "web",
},
},
},
"Service web-proxy on baz": {
Datacenter: "dc1",
Node: "baz",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindConnectProxy,
ID: "web-proxy",
Service: "web-proxy",
Port: 8443,
Address: "198.18.1.40",
Proxy: structs.ConnectProxyConfig{
DestinationServiceName: "web",
Upstreams: structs.Upstreams{
{
DestinationName: "redis",
LocalBindPort: 123,
},
},
},
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "baz",
CheckID: "baz:web-proxy",
Name: "web proxy listening",
Status: api.HealthCritical,
ServiceID: "web-proxy",
ServiceName: "web-proxy",
},
},
},
"Node zip": {
Datacenter: "dc1",
Node: "zip",
Address: "127.0.0.5",
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "zip",
CheckID: "zip:alive",
Name: "zip-liveness",
Status: api.HealthPassing,
},
},
},
"Service redis on zip": {
Datacenter: "dc1",
Node: "zip",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindTypical,
ID: "redis",
Service: "redis",
Port: 6379,
Address: "198.18.1.60",
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "zip",
CheckID: "zip:redis",
Name: "redis-liveness",
Status: api.HealthPassing,
ServiceID: "redis",
ServiceName: "redis",
},
},
},
"Service redis-proxy on zip": {
Datacenter: "dc1",
Node: "zip",
SkipNodeUpdate: true,
Service: &structs.NodeService{
Kind: structs.ServiceKindConnectProxy,
ID: "redis-proxy",
Service: "redis-proxy",
Port: 8443,
Address: "198.18.1.60",
Proxy: structs.ConnectProxyConfig{
DestinationServiceName: "redis",
},
},
Checks: structs.HealthChecks{
&structs.HealthCheck{
Node: "zip",
CheckID: "zip:redis-proxy",
Name: "redis proxy listening",
Status: api.HealthCritical,
ServiceID: "redis-proxy",
ServiceName: "redis-proxy",
},
},
},
}
for _, args := range registrations {
var out struct{}
require.NoError(t, a.RPC("Catalog.Register", args, &out))
}
}
// Add intentions: deny all, ingress -> api, web -> redis with L7 perms, but omit intention for api -> web
// Add ingress config: ingress -> api
{
entries := []structs.ConfigEntryRequest{
{
Datacenter: "dc1",
Entry: &structs.ProxyConfigEntry{
Kind: structs.ProxyDefaults,
Name: structs.ProxyConfigGlobal,
Config: map[string]interface{}{
"protocol": "http",
},
},
},
{
Datacenter: "dc1",
Entry: &structs.ServiceConfigEntry{
Kind: structs.ServiceDefaults,
Name: "api",
Protocol: "tcp",
},
},
{
Datacenter: "dc1",
Entry: &structs.ServiceIntentionsConfigEntry{
Kind: structs.ServiceIntentions,
Name: "redis",
Sources: []*structs.SourceIntention{
{
Name: "web",
Permissions: []*structs.IntentionPermission{
{
Action: structs.IntentionActionAllow,
HTTP: &structs.IntentionHTTPPermission{
Methods: []string{"GET"},
},
},
},
},
},
},
},
{
Datacenter: "dc1",
Entry: &structs.ServiceIntentionsConfigEntry{
Kind: structs.ServiceIntentions,
Name: "*",
Meta: map[string]string{structs.MetaExternalSource: "nomad"},
Sources: []*structs.SourceIntention{
{
Name: "*",
Action: structs.IntentionActionDeny,
},
},
},
},
{
Datacenter: "dc1",
Entry: &structs.ServiceIntentionsConfigEntry{
Kind: structs.ServiceIntentions,
Name: "api",
Sources: []*structs.SourceIntention{
{
Name: "ingress",
Action: structs.IntentionActionAllow,
},
},
},
},
{
Datacenter: "dc1",
Entry: &structs.IngressGatewayConfigEntry{
Kind: "ingress-gateway",
Name: "ingress",
Listeners: []structs.IngressListener{
{
Port: 1111,
Protocol: "tcp",
Services: []structs.IngressService{
{
Name: "api",
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
},
},
},
},
},
}
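// Apply each config entry via the ConfigEntry.Apply RPC so the intentions
// and ingress gateway config above are in place before querying topology.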
for _, req := range entries {
out := false
require.NoError(t, a.RPC("ConfigEntry.Apply", &req, &out))
}
}
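// Gateway topologies must be requested with an explicit kind; without one the
// handler should write an error message and return no object.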
t.Run("request without kind", func(t *testing.T) {
req, _ := http.NewRequest("GET", "/v1/internal/ui/service-topology/ingress", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIServiceTopology(resp, req)
require.Nil(t, err)
require.Nil(t, obj)
require.Equal(t, "Missing service kind", resp.Body.String())
})
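// A kind the endpoint doesn't support should likewise be rejected with an
// explanatory error body.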
t.Run("request with unsupported kind", func(t *testing.T) {
req, _ := http.NewRequest("GET", "/v1/internal/ui/service-topology/ingress?kind=not-a-kind", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIServiceTopology(resp, req)
require.Nil(t, err)
require.Nil(t, obj)
require.Equal(t, `Unsupported service kind "not-a-kind"`, resp.Body.String())
})
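// The ingress gateway's only upstream is api, allowed by the explicit
// "ingress -> api" intention.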
t.Run("ingress", func(t *testing.T) {
retry.Run(t, func(r *retry.R) {
// Request topology for ingress
req, _ := http.NewRequest("GET", "/v1/internal/ui/service-topology/ingress?kind=ingress-gateway", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIServiceTopology(resp, req)
assert.Nil(r, err)
require.NoError(r, checkIndex(resp))
expect := ServiceTopology{
Protocol: "tcp",
Upstreams: []*ServiceTopologySummary{
{
ServiceSummary: ServiceSummary{
Name: "api",
Datacenter: "dc1",
Nodes: []string{"foo"},
InstanceCount: 1,
ChecksPassing: 3,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
Intention: structs.IntentionDecisionSummary{
Allowed: true,
HasPermissions: false,
},
},
},
FilteredByACLs: false,
}
result := obj.(ServiceTopology)
// Internal accounting that is not returned in JSON response
for _, u := range result.Upstreams {
u.externalSourceSet = nil
u.checks = nil
}
require.Equal(r, expect, result)
})
})
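// api reports tcp (from its service-defaults entry), ingress as an allowed
// downstream, and web as an upstream. There is no api -> web intention, so
// the upstream summary reflects the wildcard deny registered with an
// external source of "nomad".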
t.Run("api", func(t *testing.T) {
retry.Run(t, func(r *retry.R) {
// Request topology for api
req, _ := http.NewRequest("GET", "/v1/internal/ui/service-topology/api?kind=", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIServiceTopology(resp, req)
assert.Nil(r, err)
require.NoError(r, checkIndex(resp))
expect := ServiceTopology{
Protocol: "tcp",
Downstreams: []*ServiceTopologySummary{
{
ServiceSummary: ServiceSummary{
Name: "ingress",
Kind: structs.ServiceKindIngressGateway,
Datacenter: "dc1",
Nodes: []string{"edge"},
InstanceCount: 1,
ChecksPassing: 1,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
Intention: structs.IntentionDecisionSummary{
Allowed: true,
HasPermissions: false,
},
},
},
Upstreams: []*ServiceTopologySummary{
{
ServiceSummary: ServiceSummary{
Name: "web",
Datacenter: "dc1",
Nodes: []string{"bar", "baz"},
InstanceCount: 2,
ChecksPassing: 3,
ChecksWarning: 1,
ChecksCritical: 2,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
Intention: structs.IntentionDecisionSummary{
Allowed: false,
HasPermissions: false,
ExternalSource: "nomad",
},
},
},
FilteredByACLs: false,
}
result := obj.(ServiceTopology)
// Internal accounting that is not returned in JSON response
for _, u := range result.Upstreams {
u.externalSourceSet = nil
u.checks = nil
}
for _, d := range result.Downstreams {
d.externalSourceSet = nil
d.checks = nil
}
require.Equal(r, expect, result)
})
})
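// web's upstream redis is governed by the L7 (method-level) intention, so the
// summary reports HasPermissions=true with Allowed=false; its downstream api
// again falls back to the wildcard deny from "nomad".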
t.Run("web", func(t *testing.T) {
retry.Run(t, func(r *retry.R) {
// Request topology for web
req, _ := http.NewRequest("GET", "/v1/internal/ui/service-topology/web?kind=", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIServiceTopology(resp, req)
assert.Nil(r, err)
require.NoError(r, checkIndex(resp))
expect := ServiceTopology{
Protocol: "http",
Upstreams: []*ServiceTopologySummary{
{
ServiceSummary: ServiceSummary{
Name: "redis",
Datacenter: "dc1",
Nodes: []string{"zip"},
InstanceCount: 1,
ChecksPassing: 2,
ChecksCritical: 1,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
Intention: structs.IntentionDecisionSummary{
Allowed: false,
HasPermissions: true,
},
},
},
Downstreams: []*ServiceTopologySummary{
{
ServiceSummary: ServiceSummary{
Name: "api",
Datacenter: "dc1",
Nodes: []string{"foo"},
InstanceCount: 1,
ChecksPassing: 3,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
Intention: structs.IntentionDecisionSummary{
Allowed: false,
HasPermissions: false,
ExternalSource: "nomad",
},
},
},
FilteredByACLs: false,
}
result := obj.(ServiceTopology)
// Internal accounting that is not returned in JSON response
for _, u := range result.Upstreams {
u.externalSourceSet = nil
u.checks = nil
}
for _, d := range result.Downstreams {
d.externalSourceSet = nil
d.checks = nil
}
require.Equal(r, expect, result)
})
})
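// redis has no upstreams; its only downstream is web, whose access is
// controlled by the L7 intention (HasPermissions=true).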
t.Run("redis", func(t *testing.T) {
retry.Run(t, func(r *retry.R) {
// Request topology for redis
req, _ := http.NewRequest("GET", "/v1/internal/ui/service-topology/redis?kind=", nil)
resp := httptest.NewRecorder()
obj, err := a.srv.UIServiceTopology(resp, req)
assert.Nil(r, err)
require.NoError(r, checkIndex(resp))
expect := ServiceTopology{
Protocol: "http",
Downstreams: []*ServiceTopologySummary{
{
ServiceSummary: ServiceSummary{
Name: "web",
Datacenter: "dc1",
Nodes: []string{"bar", "baz"},
InstanceCount: 2,
ChecksPassing: 3,
ChecksWarning: 1,
ChecksCritical: 2,
EnterpriseMeta: *structs.DefaultEnterpriseMeta(),
},
Intention: structs.IntentionDecisionSummary{
Allowed: false,
HasPermissions: true,
},
},
},
FilteredByACLs: false,
}
result := obj.(ServiceTopology)
// Internal accounting that is not returned in JSON response
for _, d := range result.Downstreams {
d.externalSourceSet = nil
d.checks = nil
}
require.Equal(r, expect, result)
})
})
}
func TestUIEndpoint_MetricsProxy(t *testing.T) {
t.Parallel()
var lastHeadersSent atomic.Value
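// Stub metrics backend: it records the headers of the last request it saw and
// serves a handful of fixed paths that the cases below target.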
backendH := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
lastHeadersSent.Store(r.Header)
if r.URL.Path == "/some/prefix/ok" {
w.Header().Set("X-Random-Header", "Foo")
w.Write([]byte("OK"))
return
}
if r.URL.Path == "/some/prefix/query-echo" {
w.Write([]byte("RawQuery: " + r.URL.RawQuery))
return
}
if r.URL.Path == "/.passwd" {
w.Write([]byte("SECRETS!"))
return
}
http.Error(w, "not found on backend", http.StatusNotFound)
})
backend := httptest.NewServer(backendH)
defer backend.Close()
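// Point the proxy at a sub-path of the backend so the tests also cover
// joining the proxied path onto a BaseURL that has its own prefix.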
backendURL := backend.URL + "/some/prefix"
// Share one agent for all these test cases. This has a few nice side effects:
//  1. it's cheaper
//  2. it implicitly tests that config reloading works between cases
//
// Note that we can't cover the case where the UI is disabled here since that
// setting isn't reloadable, so that is exercised in a separate test below
// rather than spinning up a new agent for every case. Response headers also
// aren't reloadable currently due to the way we wrap API endpoints at startup.
a := NewTestAgent(t, `
ui_config {
enabled = true
}
http_config {
response_headers {
"Access-Control-Allow-Origin" = "*"
}
}
`)
defer a.Shutdown()
endpointPath := "/v1/internal/ui/metrics-proxy"
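// Each case below supplies a UIMetricsProxy config, a request path, and the
// expected response; the loop at the bottom reloads the agent with that
// config before issuing the request.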
cases := []struct {
name string
config config.UIMetricsProxy
path string
wantCode int
wantContains string
wantHeaders map[string]string
wantHeadersSent map[string]string
}{
{
name: "disabled",
config: config.UIMetricsProxy{},
path: endpointPath + "/ok",
wantCode: http.StatusNotFound,
},
{
name: "basic proxying",
config: config.UIMetricsProxy{
BaseURL: backendURL,
},
path: endpointPath + "/ok",
wantCode: http.StatusOK,
wantContains: "OK",
wantHeaders: map[string]string{
"X-Random-Header": "Foo",
},
},
{
name: "404 on backend",
config: config.UIMetricsProxy{
BaseURL: backendURL,
},
path: endpointPath + "/random-path",
wantCode: http.StatusNotFound,
wantContains: "not found on backend",
},
{
// Note that this case doesn't actually exercise our validation logic at
// all, since the top-level API mux resolves this to /v1/internal/.passwd
// and the request never hits our handler. It's left in because that
// behavior wasn't obvious, and it's worth knowing if we change something
// in our mux that might affect the path traversal opportunity here. In
// fact this makes our own path traversal handling somewhat redundant,
// because any traversal that goes "back" far enough to escape the proxy
// target's BaseURL will miss our handler entirely. Still, it's better to
// be safe than sorry.
name: "path traversal should fail - api mux",
config: config.UIMetricsProxy{
BaseURL: backendURL,
},
path: endpointPath + "/../../.passwd",
wantCode: http.StatusMovedPermanently,
wantContains: "Moved Permanently",
},
{
name: "adding auth header",
config: config.UIMetricsProxy{
BaseURL: backendURL,
AddHeaders: []config.UIMetricsProxyAddHeader{
{
Name: "Authorization",
Value: "SECRET_KEY",
},
{
Name: "X-Some-Other-Header",
Value: "foo",
},
},
},
path: endpointPath + "/ok",
wantCode: http.StatusOK,
wantContains: "OK",
wantHeaders: map[string]string{
"X-Random-Header": "Foo",
},
wantHeadersSent: map[string]string{
"X-Some-Other-Header": "foo",
"Authorization": "SECRET_KEY",
},
},
{
name: "passes through query params",
config: config.UIMetricsProxy{
BaseURL: backendURL,
},
// encoded=test[0]&&test[1]==!@£$%^
path: endpointPath + "/query-echo?foo=bar&encoded=test%5B0%5D%26%26test%5B1%5D%3D%3D%21%40%C2%A3%24%25%5E",
wantCode: http.StatusOK,
wantContains: "RawQuery: foo=bar&encoded=test%5B0%5D%26%26test%5B1%5D%3D%3D%21%40%C2%A3%24%25%5E",
},
}
for _, tc := range cases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
// Reload the agent config with the desired UI config by making a copy and
// using internal reload.
cfg := *a.Agent.config
// Modify the UIConfig part (remember cfg is a copy, and that struct is
// not a pointer).
cfg.UIConfig.MetricsProxy = tc.config
require.NoError(t, a.Agent.reloadConfigInternal(&cfg))
// Now fetch the API handler to run requests against
h := a.srv.handler(true)
req := httptest.NewRequest("GET", tc.path, nil)
rec := httptest.NewRecorder()
h.ServeHTTP(rec, req)
require.Equal(t, tc.wantCode, rec.Code,
"Wrong status code. Body = %s", rec.Body.String())
require.Contains(t, rec.Body.String(), tc.wantContains)
for k, v := range tc.wantHeaders {
// Headers are a slice of values, just assert that one of the values is
// the one we want.
require.Contains(t, rec.Result().Header[k], v)
}
if len(tc.wantHeadersSent) > 0 {
headersSent, ok := lastHeadersSent.Load().(http.Header)
require.True(t, ok, "backend not called")
for k, v := range tc.wantHeadersSent {
require.Contains(t, headersSent[k], v,
"header %s doesn't have the right value set", k)
}
}
})
}
}