mirror of https://github.com/status-im/consul.git
separating usage from overview content
commit e2266e5a39

@@ -0,0 +1,3 @@
```release-note:bug
checks: populate interval and timeout when registering services
```

@@ -0,0 +1,3 @@
```release-note:feature
ca: support using an external root CA with the vault CA provider
```

@@ -0,0 +1,3 @@
```release-note:feature
ui: Support connect-native services in the Topology view.
```

@@ -0,0 +1,3 @@
```release-note:improvement
rpc: improve blocking queries for items that do not exist, by continuing to block until they exist (or the timeout).
```
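The `rpc` note above changes observable client behavior, so a concrete request helps. The following is a minimal sketch, not part of this commit: it issues Consul's documented blocking queries (the `index` and `wait` query parameters and the `X-Consul-Index` response header) against a KV key that may not exist yet; with this change the request blocks until the key appears or the wait expires instead of returning immediately. The agent address and key path are made-up examples.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// "app/config" and the local agent address are illustrative only.
	index := "0"
	client := &http.Client{Timeout: 6 * time.Minute}
	for {
		// index/wait are Consul's standard blocking-query parameters.
		url := fmt.Sprintf("http://127.0.0.1:8500/v1/kv/app/config?index=%s&wait=5m", index)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		// Resume blocking from the index the server reports.
		if next := resp.Header.Get("X-Consul-Index"); next != "" {
			index = next
		}
		fmt.Println("query returned; status:", resp.StatusCode, "index:", index)
		resp.Body.Close()
	}
}
```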
@@ -0,0 +1,3 @@
```release-note:enhancement
ui: Improve usability of Topology warning/information panels
```

@@ -0,0 +1,3 @@
```release-note:bug
ui: Ensure we always display the Policy default preview in the Namespace editing form
```

@@ -0,0 +1,3 @@
```release-note:enhancement
ui: Slightly improve usability of main navigation
```

@@ -0,0 +1,3 @@
```release-note:bug
agent: Parse datacenter from Create/Delete requests for AuthMethods and BindingRules.
```

@@ -0,0 +1,3 @@
```release-note:enhancement
ci: include 'enhancement' entry type in IMPROVEMENTS section of changelog.
```

@@ -0,0 +1,3 @@
```release-note:bug
xds: Fixed Envoy http features such as outlier detection and retry policy not working correctly with transparent proxy.
```
@@ -22,10 +22,11 @@ FEATURES:
{{ end -}}
{{- end -}}

-{{- if .NotesByType.improvement }}
+{{- $improvements := combineTypes .NotesByType.improvement .NotesByType.enhancement -}}
+{{- if $improvements }}
IMPROVEMENTS:

-{{range .NotesByType.improvement -}}
+{{range $improvements | sort -}}
* {{ template "note" . }}
{{ end -}}
{{- end -}}
CHANGELOG.md (14 changes)

@@ -4,6 +4,8 @@ IMPROVEMENTS:

* connect: update Envoy supported version of 1.20 to 1.20.1 [[GH-11895](https://github.com/hashicorp/consul/issues/11895)]
* sentinel: **(Enterprise Only)** Sentinel now uses SHA256 to generate policy ids
+* streaming: Improved performance when the server is handling many concurrent subscriptions and has a high number of CPU cores [[GH-12080](https://github.com/hashicorp/consul/issues/12080)]
+* systemd: Support starting/stopping the systemd service for linux packages when the optional EnvironmentFile does not exist. [[GH-12176](https://github.com/hashicorp/consul/issues/12176)]

BUG FIXES:

@@ -29,9 +31,11 @@ FEATURES:

IMPROVEMENTS:

+* api: URL-encode/decode resource names for v1/agent endpoints in API. [[GH-11335](https://github.com/hashicorp/consul/issues/11335)]
* api: Return 404 when de-registering a non-existent check [[GH-11950](https://github.com/hashicorp/consul/issues/11950)]
* connect: Add support for connecting to services behind a terminating gateway when using a transparent proxy. [[GH-12049](https://github.com/hashicorp/consul/issues/12049)]
* http: when a user attempts to access the UI but can't because it's disabled, explain this and how to fix it [[GH-11820](https://github.com/hashicorp/consul/issues/11820)]
+* raft: Consul leaders will attempt to transfer leadership to another server as part of gracefully leaving the cluster. [[GH-11376](https://github.com/hashicorp/consul/issues/11376)]
* ui: Added a notice for non-primary intention creation [[GH-11985](https://github.com/hashicorp/consul/issues/11985)]

BUG FIXES:

@@ -102,11 +106,14 @@ FEATURES:

IMPROVEMENTS:

+* acls: Show AuthMethodNamespace when reading/listing ACL tokens. [[GH-10598](https://github.com/hashicorp/consul/issues/10598)]
* acl: replication routine to report the last error message. [[GH-10612](https://github.com/hashicorp/consul/issues/10612)]
* agent: add variation of force-leave that exclusively works on the WAN [[GH-11722](https://github.com/hashicorp/consul/issues/11722)]
* api: Enable setting query options on agent health and maintenance endpoints. [[GH-10691](https://github.com/hashicorp/consul/issues/10691)]
+* api: responses that contain only a partial subset of results, due to filtering by ACL policies, may now include an `X-Consul-Results-Filtered-By-ACLs` header [[GH-11569](https://github.com/hashicorp/consul/issues/11569)]
* checks: add failures_before_warning setting for interval checks. [[GH-10969](https://github.com/hashicorp/consul/issues/10969)]
* ci: Upgrade to use Go 1.17.5 [[GH-11799](https://github.com/hashicorp/consul/issues/11799)]
+* ci: Allow configuring graceful stop in testutil. [[GH-10566](https://github.com/hashicorp/consul/issues/10566)]
* cli: Add `-cas` and `-modify-index` flags to the `consul config delete` command to support Check-And-Set (CAS) deletion of config entries [[GH-11419](https://github.com/hashicorp/consul/issues/11419)]
* config: **(Enterprise Only)** Allow specifying permission mode for audit logs. [[GH-10732](https://github.com/hashicorp/consul/issues/10732)]
* config: Support Check-And-Set (CAS) deletion of config entries [[GH-11419](https://github.com/hashicorp/consul/issues/11419)]

@@ -115,9 +122,7 @@ IMPROVEMENTS:
* connect/ca: cease including the common name field in generated x509 non-CA certificates [[GH-10424](https://github.com/hashicorp/consul/issues/10424)]
* connect: Add low-level feature to allow an Ingress to retrieve TLS certificates from SDS. [[GH-10903](https://github.com/hashicorp/consul/issues/10903)]
* connect: Consul will now generate a unique virtual IP for each connect-enabled service (this will also differ across namespace/partition in Enterprise). [[GH-11724](https://github.com/hashicorp/consul/issues/11724)]
-* connect: Support Vault auth methods for the Connect CA Vault provider. Currently, we support any non-deprecated auth methods
-the latest version of Vault supports (v1.8.5), which include AppRole, AliCloud, AWS, Azure, Cloud Foundry, GitHub, Google Cloud,
-JWT/OIDC, Kerberos, Kubernetes, LDAP, Oracle Cloud Infrastructure, Okta, Radius, TLS Certificates, and Username & Password. [[GH-11573](https://github.com/hashicorp/consul/issues/11573)]
+* connect: Support Vault auth methods for the Connect CA Vault provider. Currently, we support any non-deprecated auth methods the latest version of Vault supports (v1.8.5), which include AppRole, AliCloud, AWS, Azure, Cloud Foundry, GitHub, Google Cloud, JWT/OIDC, Kerberos, Kubernetes, LDAP, Oracle Cloud Infrastructure, Okta, Radius, TLS Certificates, and Username & Password. [[GH-11573](https://github.com/hashicorp/consul/issues/11573)]
* connect: Support manipulating HTTP headers in the mesh. [[GH-10613](https://github.com/hashicorp/consul/issues/10613)]
* connect: add Namespace configuration setting for Vault CA provider [[GH-11477](https://github.com/hashicorp/consul/issues/11477)]
* connect: ingress gateways may now enable built-in TLS for a subset of listeners. [[GH-11163](https://github.com/hashicorp/consul/issues/11163)]

@@ -135,7 +140,9 @@ JWT/OIDC, Kerberos, Kubernetes, LDAP, Oracle Cloud Infrastructure, Okta, Radius,
* segments: **(Enterprise only)** ensure that the serf_lan_allowed_cidrs applies to network segments [[GH-11495](https://github.com/hashicorp/consul/issues/11495)]
* telemetry: add a new `agent.tls.cert.expiry` metric for tracking when the Agent TLS certificate expires. [[GH-10768](https://github.com/hashicorp/consul/issues/10768)]
* telemetry: add a new `mesh.active-root-ca.expiry` metric for tracking when the root certificate expires. [[GH-9924](https://github.com/hashicorp/consul/issues/9924)]
+* telemetry: added metrics to track certificates expiry. [[GH-10504](https://github.com/hashicorp/consul/issues/10504)]
* types: add TLSVersion and TLSCipherSuite [[GH-11645](https://github.com/hashicorp/consul/issues/11645)]
+* ui: Change partition URL segment prefix from `-` to `_` [[GH-11801](https://github.com/hashicorp/consul/issues/11801)]
* ui: Add upstream icons for upstreams and upstream instances [[GH-11556](https://github.com/hashicorp/consul/issues/11556)]
* ui: Add uri guard to prevent future URL encoding issues [[GH-11117](https://github.com/hashicorp/consul/issues/11117)]
* ui: Move the majority of our SASS variables to use native CSS custom

@@ -207,6 +214,7 @@ SECURITY:

IMPROVEMENTS:

+* raft: Consul leaders will attempt to transfer leadership to another server as part of gracefully leaving the cluster. [[GH-11376](https://github.com/hashicorp/consul/issues/11376)]
* sentinel: **(Enterprise Only)** Sentinel now uses SHA256 to generate policy ids

BUG FIXES:
@@ -751,9 +751,8 @@ func (s *HTTPHandlers) ACLBindingRuleCreate(resp http.ResponseWriter, req *http.
}

func (s *HTTPHandlers) ACLBindingRuleWrite(resp http.ResponseWriter, req *http.Request, bindingRuleID string) (interface{}, error) {
-    args := structs.ACLBindingRuleSetRequest{
-        Datacenter: s.agent.config.Datacenter,
-    }
+    args := structs.ACLBindingRuleSetRequest{}
+    s.parseDC(req, &args.Datacenter)
    s.parseToken(req, &args.Token)
    if err := s.parseEntMeta(req, &args.BindingRule.EnterpriseMeta); err != nil {
        return nil, err

@@ -779,9 +778,9 @@ func (s *HTTPHandlers) ACLBindingRuleWrite(resp http.ResponseWriter, req *http.R

func (s *HTTPHandlers) ACLBindingRuleDelete(resp http.ResponseWriter, req *http.Request, bindingRuleID string) (interface{}, error) {
    args := structs.ACLBindingRuleDeleteRequest{
-        Datacenter:    s.agent.config.Datacenter,
        BindingRuleID: bindingRuleID,
    }
+    s.parseDC(req, &args.Datacenter)
    s.parseToken(req, &args.Token)
    if err := s.parseEntMeta(req, &args.EnterpriseMeta); err != nil {
        return nil, err

@@ -898,9 +897,8 @@ func (s *HTTPHandlers) ACLAuthMethodCreate(resp http.ResponseWriter, req *http.R
}

func (s *HTTPHandlers) ACLAuthMethodWrite(resp http.ResponseWriter, req *http.Request, methodName string) (interface{}, error) {
-    args := structs.ACLAuthMethodSetRequest{
-        Datacenter: s.agent.config.Datacenter,
-    }
+    args := structs.ACLAuthMethodSetRequest{}
+    s.parseDC(req, &args.Datacenter)
    s.parseToken(req, &args.Token)
    if err := s.parseEntMeta(req, &args.AuthMethod.EnterpriseMeta); err != nil {
        return nil, err

@@ -929,9 +927,9 @@ func (s *HTTPHandlers) ACLAuthMethodWrite(resp http.ResponseWriter, req *http.Re

func (s *HTTPHandlers) ACLAuthMethodDelete(resp http.ResponseWriter, req *http.Request, methodName string) (interface{}, error) {
    args := structs.ACLAuthMethodDeleteRequest{
-        Datacenter:     s.agent.config.Datacenter,
        AuthMethodName: methodName,
    }
+    s.parseDC(req, &args.Datacenter)
    s.parseToken(req, &args.Token)
    if err := s.parseEntMeta(req, &args.EnterpriseMeta); err != nil {
        return nil, err
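Each hunk above swaps the hard-coded `Datacenter: s.agent.config.Datacenter` for `s.parseDC(req, &args.Datacenter)`, which is what lets a `?dc=` query parameter on these AuthMethod/BindingRule endpoints select a datacenter (the "agent: Parse datacenter from Create/Delete requests" changelog entry). The helper itself is not shown in this diff; the sketch below is only an assumed illustration of that shape of helper, not Consul's actual implementation.

```go
package main

import (
	"fmt"
	"net/http"
)

// parseDCSketch is a hypothetical stand-in for a dc-parsing helper: it fills
// dc from the request's "dc" query parameter and falls back to a default
// datacenter when the parameter is absent.
func parseDCSketch(req *http.Request, defaultDC string, dc *string) {
	if other := req.URL.Query().Get("dc"); other != "" {
		*dc = other
		return
	}
	*dc = defaultDC
}

func main() {
	req, _ := http.NewRequest("PUT", "/v1/acl/binding-rule?dc=remote", nil)
	var dc string
	parseDCSketch(req, "dc1", &dc)
	fmt.Println(dc) // "remote"; without ?dc= it would print "dc1"
}
```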
@@ -1222,6 +1222,26 @@ func TestACL_LoginProcedure_HTTP(t *testing.T) {
            methodMap[method.Name] = method
        })

+        t.Run("Create in remote datacenter", func(t *testing.T) {
+            methodInput := &structs.ACLAuthMethod{
+                Name:        "other",
+                Type:        "testing",
+                Description: "test",
+                Config: map[string]interface{}{
+                    "SessionID": testSessionID,
+                },
+                TokenLocality: "global",
+                MaxTokenTTL:   500_000_000_000,
+            }
+
+            req, _ := http.NewRequest("PUT", "/v1/acl/auth-method?token=root&dc=remote", jsonBody(methodInput))
+            resp := httptest.NewRecorder()
+            _, err := a.srv.ACLAuthMethodCRUD(resp, req)
+            require.Error(t, err)
+            _, ok := err.(BadRequestError)
+            require.True(t, ok)
+        })
+
        t.Run("Update Name URL Mismatch", func(t *testing.T) {
            methodInput := &structs.ACLAuthMethod{
                Name: "test",

@@ -1394,6 +1414,21 @@ func TestACL_LoginProcedure_HTTP(t *testing.T) {
            ruleMap[rule.ID] = rule
        })

+        t.Run("Create in remote datacenter", func(t *testing.T) {
+            ruleInput := &structs.ACLBindingRule{
+                Description: "other",
+                AuthMethod:  "test",
+                Selector:    "serviceaccount.namespace==default",
+                BindType:    structs.BindingRuleBindTypeRole,
+                BindName:    "fancy-role",
+            }
+
+            req, _ := http.NewRequest("PUT", "/v1/acl/binding-rule?token=root&dc=remote", jsonBody(ruleInput))
+            resp := httptest.NewRecorder()
+            _, err := a.srv.ACLBindingRuleCRUD(resp, req)
+            require.EqualError(t, err, "No path to datacenter")
+        })
+
        t.Run("BindingRule CRUD Missing ID in URL", func(t *testing.T) {
            req, _ := http.NewRequest("GET", "/v1/acl/binding-rule/?token=root", nil)
            resp := httptest.NewRecorder()
@@ -2120,10 +2120,22 @@ func (a *Agent) addServiceInternal(req addServiceInternalRequest) error {
        if name == "" {
            name = fmt.Sprintf("Service '%s' check", service.Service)
        }
+
+        var intervalStr string
+        var timeoutStr string
+        if chkType.Interval != 0 {
+            intervalStr = chkType.Interval.String()
+        }
+        if chkType.Timeout != 0 {
+            timeoutStr = chkType.Timeout.String()
+        }
+
        check := &structs.HealthCheck{
            Node:      a.config.NodeName,
            CheckID:   types.CheckID(checkID),
            Name:      name,
+            Interval:  intervalStr,
+            Timeout:   timeoutStr,
            Status:    api.HealthCritical,
            Notes:     chkType.Notes,
            ServiceID: service.ID,
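This hunk implements the "checks: populate interval and timeout when registering services" note: the values from the check definition now end up on the stored `HealthCheck`. As a hedged illustration using the public `github.com/hashicorp/consul/api` client (the service name, port, and health endpoint are invented), a registration whose check carries both values looks like this; after this change the same `Interval` and `Timeout` are reflected on the check the agent stores.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register a service with an HTTP check that has both an interval and a
	// timeout; both values should now show up on the agent's stored check.
	reg := &api.AgentServiceRegistration{
		Name: "web",
		Port: 8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://127.0.0.1:8080/health",
			Interval: "10s",
			Timeout:  "2s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
}
```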
@@ -452,6 +452,8 @@ func testAgent_AddService(t *testing.T, extraHCL string) {
                Node:      "node1",
                CheckID:   "check1",
                Name:      "name1",
+                Interval:  "",
+                Timeout:   "", // these are empty because a TTL was provided
                Status:    "critical",
                Notes:     "note1",
                ServiceID: "svcid1",

@@ -500,6 +502,8 @@ func testAgent_AddService(t *testing.T, extraHCL string) {
                Node:      "node1",
                CheckID:   "check1",
                Name:      "name1",
+                Interval:  "",
+                Timeout:   "", // these are empty because a TTL was provided
                Status:    "critical",
                Notes:     "note1",
                ServiceID: "svcid2",

@@ -512,6 +516,8 @@ func testAgent_AddService(t *testing.T, extraHCL string) {
                Node:        "node1",
                CheckID:     "check-noname",
                Name:        "Service 'svcname2' check",
+                Interval:    "",
+                Timeout:     "", // these are empty because a TTL was provided
                Status:      "critical",
                ServiceID:   "svcid2",
                ServiceName: "svcname2",

@@ -523,6 +529,8 @@ func testAgent_AddService(t *testing.T, extraHCL string) {
                Node:        "node1",
                CheckID:     "service:svcid2:3",
                Name:        "check-noid",
+                Interval:    "",
+                Timeout:     "", // these are empty because a TTL was provided
                Status:      "critical",
                ServiceID:   "svcid2",
                ServiceName: "svcname2",

@@ -534,6 +542,8 @@ func testAgent_AddService(t *testing.T, extraHCL string) {
                Node:        "node1",
                CheckID:     "service:svcid2:4",
                Name:        "Service 'svcname2' check",
+                Interval:    "",
+                Timeout:     "", // these are empty because a TTL was provided
                Status:      "critical",
                ServiceID:   "svcid2",
                ServiceName: "svcname2",
@@ -0,0 +1,249 @@
package config

import (
    "context"
    "fmt"
    "os"
    "path/filepath"
    "sync"
    "time"

    "github.com/fsnotify/fsnotify"
    "github.com/hashicorp/go-hclog"
)

const timeoutDuration = 200 * time.Millisecond

type FileWatcher struct {
    watcher          *fsnotify.Watcher
    configFiles      map[string]*watchedFile
    logger           hclog.Logger
    reconcileTimeout time.Duration
    cancel           context.CancelFunc
    done             chan interface{}
    stopOnce         sync.Once

    // EventsCh is the channel on which an event is emitted when a file change
    // is detected. A call to Start is needed before any event is emitted;
    // after a call to Stop succeeds, the channel is closed.
    EventsCh chan *FileWatcherEvent
}

type watchedFile struct {
    modTime time.Time
}

type FileWatcherEvent struct {
    Filename string
}

// NewFileWatcher creates a file watcher that will watch all the files/folders
// from configFiles. On success it returns a FileWatcher and a nil error;
// otherwise it returns an error and a nil FileWatcher.
func NewFileWatcher(configFiles []string, logger hclog.Logger) (*FileWatcher, error) {
    ws, err := fsnotify.NewWatcher()
    if err != nil {
        return nil, err
    }
    w := &FileWatcher{
        watcher:          ws,
        logger:           logger.Named("file-watcher"),
        configFiles:      make(map[string]*watchedFile),
        EventsCh:         make(chan *FileWatcherEvent),
        reconcileTimeout: timeoutDuration,
        done:             make(chan interface{}),
        stopOnce:         sync.Once{},
    }
    for _, f := range configFiles {
        err = w.add(f)
        if err != nil {
            return nil, fmt.Errorf("error adding file %q: %w", f, err)
        }
    }

    return w, nil
}

// Start starts the file watcher, with a copy of the passed context.
// Calling Start multiple times is a noop.
func (w *FileWatcher) Start(ctx context.Context) {
    if w.cancel == nil {
        cancelCtx, cancel := context.WithCancel(ctx)
        w.cancel = cancel
        go w.watch(cancelCtx)
    }
}

// Stop stops the file watcher.
// Calling Stop multiple times is a noop; Stop must be called after a Start.
func (w *FileWatcher) Stop() error {
    var err error
    w.stopOnce.Do(func() {
        w.cancel()
        <-w.done
        close(w.EventsCh)
        err = w.watcher.Close()
    })
    return err
}

func (w *FileWatcher) add(filename string) error {
    if isSymLink(filename) {
        return fmt.Errorf("symbolic links are not supported %s", filename)
    }
    filename = filepath.Clean(filename)
    w.logger.Trace("adding file", "file", filename)
    if err := w.watcher.Add(filename); err != nil {
        return err
    }
    modTime, err := w.getFileModifiedTime(filename)
    if err != nil {
        return err
    }
    w.configFiles[filename] = &watchedFile{modTime: modTime}
    return nil
}

func isSymLink(filename string) bool {
    fi, err := os.Lstat(filename)
    if err != nil {
        return false
    }
    if fi.Mode()&os.ModeSymlink != 0 {
        return true
    }
    return false
}

func (w *FileWatcher) watch(ctx context.Context) {
    ticker := time.NewTicker(w.reconcileTimeout)
    defer ticker.Stop()
    defer close(w.done)

    for {
        select {
        case event, ok := <-w.watcher.Events:
            if !ok {
                w.logger.Error("watcher event channel is closed")
                return
            }
            w.logger.Trace("received watcher event", "event", event)
            if err := w.handleEvent(ctx, event); err != nil {
                w.logger.Error("error handling watcher event", "error", err, "event", event)
            }
        case _, ok := <-w.watcher.Errors:
            if !ok {
                w.logger.Error("watcher error channel is closed")
                return
            }
        case <-ticker.C:
            w.reconcile(ctx)
        case <-ctx.Done():
            return
        }
    }
}

func (w *FileWatcher) handleEvent(ctx context.Context, event fsnotify.Event) error {
    w.logger.Trace("event received ", "filename", event.Name, "OP", event.Op)
    // we only care about Create, Remove, Write, and Rename events; everything
    // else (for example Chmod) is ignored to avoid triggering spurious reloads
    if !isCreateEvent(event) && !isRemoveEvent(event) && !isWriteEvent(event) && !isRenameEvent(event) {
        return nil
    }
    filename := filepath.Clean(event.Name)
    configFile, basename, ok := w.isWatched(filename)
    if !ok {
        return fmt.Errorf("file %s is not watched", event.Name)
    }

    // we only want to update mod time and re-add if the event is on the watched file itself
    if filename == basename {
        if isRemoveEvent(event) {
            // If the file was removed, try to reconcile and see if anything changed.
            w.logger.Trace("attempt a reconcile ", "filename", event.Name, "OP", event.Op)
            configFile.modTime = time.Time{}
            w.reconcile(ctx)
        }
    }
    if isCreateEvent(event) || isWriteEvent(event) || isRenameEvent(event) {
        w.logger.Trace("call the handler", "filename", event.Name, "OP", event.Op)
        select {
        case w.EventsCh <- &FileWatcherEvent{Filename: filename}:
        case <-ctx.Done():
            return ctx.Err()
        }
    }
    return nil
}

func (w *FileWatcher) isWatched(filename string) (*watchedFile, string, bool) {
    path := filename
    configFile, ok := w.configFiles[path]
    if ok {
        return configFile, path, true
    }

    stat, err := os.Lstat(filename)

    // if the path does not exist, or it is neither a directory nor a symlink,
    // check whether the watched path is its parent directory
    if os.IsNotExist(err) || (!stat.IsDir() && stat.Mode()&os.ModeSymlink == 0) {
        w.logger.Trace("not a dir and not a symlink to a dir")
        // try to see if the watched path is the parent dir
        newPath := filepath.Dir(path)
        w.logger.Trace("get dir", "dir", newPath)
        configFile, ok = w.configFiles[newPath]
    }
    return configFile, path, ok
}

func (w *FileWatcher) reconcile(ctx context.Context) {
    for filename, configFile := range w.configFiles {
        w.logger.Trace("reconciling", "filename", filename)
        newModTime, err := w.getFileModifiedTime(filename)
        if err != nil {
            w.logger.Error("failed to get file modTime", "file", filename, "err", err)
            continue
        }

        err = w.watcher.Add(filename)
        if err != nil {
            w.logger.Error("failed to add file to watcher", "file", filename, "err", err)
            continue
        }
        if !configFile.modTime.Equal(newModTime) {
            w.logger.Trace("call the handler", "filename", filename, "old modTime", configFile.modTime, "new modTime", newModTime)
            w.configFiles[filename].modTime = newModTime
            select {
            case w.EventsCh <- &FileWatcherEvent{Filename: filename}:
            case <-ctx.Done():
                return
            }
        }
    }
}

func isCreateEvent(event fsnotify.Event) bool {
    return event.Op&fsnotify.Create == fsnotify.Create
}

func isRemoveEvent(event fsnotify.Event) bool {
    return event.Op&fsnotify.Remove == fsnotify.Remove
}

func isWriteEvent(event fsnotify.Event) bool {
    return event.Op&fsnotify.Write == fsnotify.Write
}

func isRenameEvent(event fsnotify.Event) bool {
    return event.Op&fsnotify.Rename == fsnotify.Rename
}

func (w *FileWatcher) getFileModifiedTime(filename string) (time.Time, error) {
    fileInfo, err := os.Stat(filename)
    if err != nil {
        return time.Time{}, err
    }

    return fileInfo.ModTime(), err
}
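A short usage sketch for the watcher above, assuming it sits in the same `config` package; the caller, its paths, and the reload action are invented for illustration. Construct the watcher with the config paths, `Start` it with a context, consume `EventsCh`, and `Stop` it on shutdown.

```go
package config

import (
	"context"
	"log"

	"github.com/hashicorp/go-hclog"
)

// watchConfigSketch is an illustrative consumer of FileWatcher, assumed to
// live alongside it in this package. It is not part of this commit.
func watchConfigSketch(ctx context.Context, paths []string) error {
	w, err := NewFileWatcher(paths, hclog.Default())
	if err != nil {
		return err
	}
	w.Start(ctx)

	go func() {
		// Each event names the watched file (or directory) that changed; a
		// real consumer would trigger a config reload here. The loop ends
		// when Stop closes EventsCh.
		for ev := range w.EventsCh {
			log.Println("config changed:", ev.Filename)
		}
	}()

	<-ctx.Done()    // wait for shutdown
	return w.Stop() // closes EventsCh and releases the underlying fsnotify watcher
}
```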
@@ -0,0 +1,337 @@
package config

import (
    "context"
    "fmt"
    "math/rand"
    "os"
    "strings"
    "testing"
    "time"

    "github.com/hashicorp/go-hclog"

    "github.com/hashicorp/consul/sdk/testutil"
    "github.com/stretchr/testify/require"
)

const defaultTimeout = 500 * time.Millisecond

func TestNewWatcher(t *testing.T) {
    w, err := NewFileWatcher([]string{}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    require.NotNil(t, w)
}

func TestWatcherRenameEvent(t *testing.T) {
    fileTmp := createTempConfigFile(t, "temp_config3")
    filepaths := []string{createTempConfigFile(t, "temp_config1"), createTempConfigFile(t, "temp_config2")}
    w, err := NewFileWatcher(filepaths, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    require.NoError(t, err)
    err = os.Rename(fileTmp, filepaths[0])
    require.NoError(t, err)
    require.NoError(t, assertEvent(filepaths[0], w.EventsCh, defaultTimeout))
    // make sure we consume all events
    assertEvent(filepaths[0], w.EventsCh, defaultTimeout)
}

func TestWatcherAddNotExist(t *testing.T) {
    file := testutil.TempFile(t, "temp_config")
    filename := file.Name() + randomStr(16)
    w, err := NewFileWatcher([]string{filename}, hclog.New(&hclog.LoggerOptions{}))
    require.Error(t, err, "no such file or directory")
    require.Nil(t, w)
}

func TestEventWatcherWrite(t *testing.T) {
    file := testutil.TempFile(t, "temp_config")
    _, err := file.WriteString("test config")
    require.NoError(t, err)
    err = file.Sync()
    require.NoError(t, err)
    w, err := NewFileWatcher([]string{file.Name()}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    _, err = file.WriteString("test config 2")
    require.NoError(t, err)
    err = file.Sync()
    require.NoError(t, err)
    require.NoError(t, assertEvent(file.Name(), w.EventsCh, defaultTimeout))
}

func TestEventWatcherRead(t *testing.T) {
    filepath := createTempConfigFile(t, "temp_config1")
    w, err := NewFileWatcher([]string{filepath}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    _, err = os.ReadFile(filepath)
    require.NoError(t, err)
    require.Error(t, assertEvent(filepath, w.EventsCh, defaultTimeout), "timed out waiting for event")
}

func TestEventWatcherChmod(t *testing.T) {
    file := testutil.TempFile(t, "temp_config")
    defer func() {
        err := file.Close()
        require.NoError(t, err)
    }()
    _, err := file.WriteString("test config")
    require.NoError(t, err)
    err = file.Sync()
    require.NoError(t, err)

    w, err := NewFileWatcher([]string{file.Name()}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    err = file.Chmod(0777)
    require.NoError(t, err)
    require.Error(t, assertEvent(file.Name(), w.EventsCh, defaultTimeout), "timed out waiting for event")
}

func TestEventWatcherRemoveCreate(t *testing.T) {
    filepath := createTempConfigFile(t, "temp_config1")
    w, err := NewFileWatcher([]string{filepath}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    require.NoError(t, err)
    err = os.Remove(filepath)
    require.NoError(t, err)
    recreated, err := os.Create(filepath)
    require.NoError(t, err)
    _, err = recreated.WriteString("config 2")
    require.NoError(t, err)
    err = recreated.Sync()
    require.NoError(t, err)
    // this is an event coming from the reconcile loop
    require.NoError(t, assertEvent(filepath, w.EventsCh, defaultTimeout))
}

func TestEventWatcherMove(t *testing.T) {
    filepath := createTempConfigFile(t, "temp_config1")

    w, err := NewFileWatcher([]string{filepath}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    for i := 0; i < 10; i++ {
        filepath2 := createTempConfigFile(t, "temp_config2")
        err = os.Rename(filepath2, filepath)
        require.NoError(t, err)
        require.NoError(t, assertEvent(filepath, w.EventsCh, defaultTimeout))
    }
}

func TestEventReconcileMove(t *testing.T) {
    filepath := createTempConfigFile(t, "temp_config1")
    filepath2 := createTempConfigFile(t, "temp_config2")
    err := os.Chtimes(filepath, time.Now(), time.Now().Add(-1*time.Second))
    require.NoError(t, err)
    w, err := NewFileWatcher([]string{filepath}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    // remove the file from the internal watcher to only trigger the reconcile
    err = w.watcher.Remove(filepath)
    require.NoError(t, err)

    err = os.Rename(filepath2, filepath)
    require.NoError(t, err)
    require.NoError(t, assertEvent(filepath, w.EventsCh, 2000*time.Millisecond))
}

func TestEventWatcherDirCreateRemove(t *testing.T) {
    filepath := testutil.TempDir(t, "temp_config1")
    w, err := NewFileWatcher([]string{filepath}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()
    for i := 0; i < 1; i++ {
        name := filepath + "/" + randomStr(20)
        file, err := os.Create(name)
        require.NoError(t, err)
        err = file.Close()
        require.NoError(t, err)
        require.NoError(t, assertEvent(filepath, w.EventsCh, defaultTimeout))

        err = os.Remove(name)
        require.NoError(t, err)
        require.NoError(t, assertEvent(filepath, w.EventsCh, defaultTimeout))
    }
}

func TestEventWatcherDirMove(t *testing.T) {
    filepath := testutil.TempDir(t, "temp_config1")

    name := filepath + "/" + randomStr(20)
    file, err := os.Create(name)
    require.NoError(t, err)
    err = file.Close()
    require.NoError(t, err)
    w, err := NewFileWatcher([]string{filepath}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    for i := 0; i < 100; i++ {
        filepathTmp := createTempConfigFile(t, "temp_config2")
        err = os.Rename(filepathTmp, name)
        require.NoError(t, err)
        require.NoError(t, assertEvent(filepath, w.EventsCh, defaultTimeout))
    }
}

func TestEventWatcherDirMoveTrim(t *testing.T) {
    filepath := testutil.TempDir(t, "temp_config1")

    name := filepath + "/" + randomStr(20)
    file, err := os.Create(name)
    require.NoError(t, err)
    err = file.Close()
    require.NoError(t, err)
    w, err := NewFileWatcher([]string{filepath + "/"}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    for i := 0; i < 100; i++ {
        filepathTmp := createTempConfigFile(t, "temp_config2")
        err = os.Rename(filepathTmp, name)
        require.NoError(t, err)
        require.NoError(t, assertEvent(filepath, w.EventsCh, defaultTimeout))
    }
}

// Consul does not support configuration in sub-directories
func TestEventWatcherSubDirMove(t *testing.T) {
    filepath := testutil.TempDir(t, "temp_config1")
    err := os.Mkdir(filepath+"/temp", 0777)
    require.NoError(t, err)
    name := filepath + "/temp/" + randomStr(20)
    file, err := os.Create(name)
    require.NoError(t, err)
    err = file.Close()
    require.NoError(t, err)
    w, err := NewFileWatcher([]string{filepath}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    defer func() {
        _ = w.Stop()
    }()

    for i := 0; i < 2; i++ {
        filepathTmp := createTempConfigFile(t, "temp_config2")
        err = os.Rename(filepathTmp, name)
        require.NoError(t, err)
        require.Error(t, assertEvent(filepath, w.EventsCh, defaultTimeout), "timed out waiting for event")
    }
}

func TestEventWatcherDirRead(t *testing.T) {
    filepath := testutil.TempDir(t, "temp_config1")

    name := filepath + "/" + randomStr(20)
    file, err := os.Create(name)
    require.NoError(t, err)
    err = file.Close()
    require.NoError(t, err)
    w, err := NewFileWatcher([]string{filepath}, hclog.New(&hclog.LoggerOptions{}))
    require.NoError(t, err)
    w.Start(context.Background())
    t.Cleanup(func() {
        _ = w.Stop()
    })

    _, err = os.ReadFile(name)
    require.NoError(t, err)
    require.Error(t, assertEvent(filepath, w.EventsCh, defaultTimeout), "timed out waiting for event")
}

func TestEventWatcherMoveSoftLink(t *testing.T) {
    filepath := createTempConfigFile(t, "temp_config1")
    tempDir := testutil.TempDir(t, "temp_dir")
    name := tempDir + "/" + randomStr(20)
    err := os.Symlink(filepath, name)
    require.NoError(t, err)

    w, err := NewFileWatcher([]string{name}, hclog.New(&hclog.LoggerOptions{}))
    require.Error(t, err, "symbolic links are not supported")
    require.Nil(t, w)
}

func assertEvent(name string, watcherCh chan *FileWatcherEvent, timeout time.Duration) error {
    select {
    case ev := <-watcherCh:
        if ev.Filename != name && !strings.Contains(ev.Filename, name) {
            return fmt.Errorf("filenames do not match %s %s", ev.Filename, name)
        }
        return nil
    case <-time.After(timeout):
        return fmt.Errorf("timed out waiting for event")
    }
}

func createTempConfigFile(t *testing.T, filename string) string {
    file := testutil.TempFile(t, filename)

    _, err1 := file.WriteString("test config")
    err2 := file.Close()

    require.NoError(t, err1)
    require.NoError(t, err2)

    return file.Name()
}

func randomStr(length int) string {
    const charset = "abcdefghijklmnopqrstuvwxyz" +
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    var seededRand *rand.Rand = rand.New(
        rand.NewSource(time.Now().UnixNano()))
    b := make([]byte, length)
    for i := range b {
        b[i] = charset[seededRand.Intn(len(charset))]
    }
    return string(b)
}
@@ -0,0 +1,34 @@
package configentry

import (
    "github.com/hashicorp/consul/agent/structs"
)

// KindName is a value type useful for maps. You can use:
// map[KindName]Payload
// instead of:
// map[string]map[string]Payload
type KindName struct {
    Kind string
    Name string
    structs.EnterpriseMeta
}

// NewKindName returns a new KindName. The EnterpriseMeta values will be
// normalized based on the kind.
//
// Any caller which modifies the EnterpriseMeta field must call Normalize
// before persisting or using the value as a map key.
func NewKindName(kind, name string, entMeta *structs.EnterpriseMeta) KindName {
    ret := KindName{
        Kind: kind,
        Name: name,
    }
    if entMeta == nil {
        entMeta = structs.DefaultEnterpriseMetaInDefaultPartition()
    }

    ret.EnterpriseMeta = *entMeta
    ret.Normalize()
    return ret
}
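The comment above motivates `KindName` as a single composite map key. Below is a hedged sketch of that usage, assumed to live in the same package; the indexing function is invented for illustration and relies only on the `structs.ConfigEntry` accessors already used elsewhere in this commit.

```go
package configentry

import "github.com/hashicorp/consul/agent/structs"

// indexByKindName shows the map shape the comment above describes: one
// composite key instead of nested maps keyed first by kind and then by name.
func indexByKindName(entries []structs.ConfigEntry) map[KindName]structs.ConfigEntry {
	out := make(map[KindName]structs.ConfigEntry, len(entries))
	for _, e := range entries {
		// NewKindName normalizes the enterprise metadata, so a lookup key
		// built elsewhere from equivalent values hits the same map slot.
		out[NewKindName(e.GetKind(), e.GetName(), e.GetEnterpriseMeta())] = e
	}
	return out
}
```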
@@ -0,0 +1,149 @@
package configentry

import (
    "github.com/hashicorp/consul/agent/structs"
)

// DiscoveryChainSet is a wrapped set of raw cross-referenced config entries
// necessary for the DiscoveryChain.Get RPC process.
//
// None of these are defaulted.
type DiscoveryChainSet struct {
    Routers       map[structs.ServiceID]*structs.ServiceRouterConfigEntry
    Splitters     map[structs.ServiceID]*structs.ServiceSplitterConfigEntry
    Resolvers     map[structs.ServiceID]*structs.ServiceResolverConfigEntry
    Services      map[structs.ServiceID]*structs.ServiceConfigEntry
    ProxyDefaults map[string]*structs.ProxyConfigEntry
}

func NewDiscoveryChainSet() *DiscoveryChainSet {
    return &DiscoveryChainSet{
        Routers:       make(map[structs.ServiceID]*structs.ServiceRouterConfigEntry),
        Splitters:     make(map[structs.ServiceID]*structs.ServiceSplitterConfigEntry),
        Resolvers:     make(map[structs.ServiceID]*structs.ServiceResolverConfigEntry),
        Services:      make(map[structs.ServiceID]*structs.ServiceConfigEntry),
        ProxyDefaults: make(map[string]*structs.ProxyConfigEntry),
    }
}

func (e *DiscoveryChainSet) GetRouter(sid structs.ServiceID) *structs.ServiceRouterConfigEntry {
    if e.Routers != nil {
        return e.Routers[sid]
    }
    return nil
}

func (e *DiscoveryChainSet) GetSplitter(sid structs.ServiceID) *structs.ServiceSplitterConfigEntry {
    if e.Splitters != nil {
        return e.Splitters[sid]
    }
    return nil
}

func (e *DiscoveryChainSet) GetResolver(sid structs.ServiceID) *structs.ServiceResolverConfigEntry {
    if e.Resolvers != nil {
        return e.Resolvers[sid]
    }
    return nil
}

func (e *DiscoveryChainSet) GetService(sid structs.ServiceID) *structs.ServiceConfigEntry {
    if e.Services != nil {
        return e.Services[sid]
    }
    return nil
}

func (e *DiscoveryChainSet) GetProxyDefaults(partition string) *structs.ProxyConfigEntry {
    if e.ProxyDefaults != nil {
        return e.ProxyDefaults[partition]
    }
    return nil
}

// AddRouters adds router configs. Convenience function for testing.
func (e *DiscoveryChainSet) AddRouters(entries ...*structs.ServiceRouterConfigEntry) {
    if e.Routers == nil {
        e.Routers = make(map[structs.ServiceID]*structs.ServiceRouterConfigEntry)
    }
    for _, entry := range entries {
        e.Routers[structs.NewServiceID(entry.Name, &entry.EnterpriseMeta)] = entry
    }
}

// AddSplitters adds splitter configs. Convenience function for testing.
func (e *DiscoveryChainSet) AddSplitters(entries ...*structs.ServiceSplitterConfigEntry) {
    if e.Splitters == nil {
        e.Splitters = make(map[structs.ServiceID]*structs.ServiceSplitterConfigEntry)
    }
    for _, entry := range entries {
        e.Splitters[structs.NewServiceID(entry.Name, entry.GetEnterpriseMeta())] = entry
    }
}

// AddResolvers adds resolver configs. Convenience function for testing.
func (e *DiscoveryChainSet) AddResolvers(entries ...*structs.ServiceResolverConfigEntry) {
    if e.Resolvers == nil {
        e.Resolvers = make(map[structs.ServiceID]*structs.ServiceResolverConfigEntry)
    }
    for _, entry := range entries {
        e.Resolvers[structs.NewServiceID(entry.Name, entry.GetEnterpriseMeta())] = entry
    }
}

// AddServices adds service configs. Convenience function for testing.
func (e *DiscoveryChainSet) AddServices(entries ...*structs.ServiceConfigEntry) {
    if e.Services == nil {
        e.Services = make(map[structs.ServiceID]*structs.ServiceConfigEntry)
    }
    for _, entry := range entries {
        e.Services[structs.NewServiceID(entry.Name, entry.GetEnterpriseMeta())] = entry
    }
}

// AddProxyDefaults adds proxy-defaults configs. Convenience function for testing.
func (e *DiscoveryChainSet) AddProxyDefaults(entries ...*structs.ProxyConfigEntry) {
    if e.ProxyDefaults == nil {
        e.ProxyDefaults = make(map[string]*structs.ProxyConfigEntry)
    }
    for _, entry := range entries {
        e.ProxyDefaults[entry.PartitionOrDefault()] = entry
    }
}

// AddEntries adds generic configs. Convenience function for testing. Panics on
// operator error.
func (e *DiscoveryChainSet) AddEntries(entries ...structs.ConfigEntry) {
    for _, entry := range entries {
        switch entry.GetKind() {
        case structs.ServiceRouter:
            e.AddRouters(entry.(*structs.ServiceRouterConfigEntry))
        case structs.ServiceSplitter:
            e.AddSplitters(entry.(*structs.ServiceSplitterConfigEntry))
        case structs.ServiceResolver:
            e.AddResolvers(entry.(*structs.ServiceResolverConfigEntry))
        case structs.ServiceDefaults:
            e.AddServices(entry.(*structs.ServiceConfigEntry))
        case structs.ProxyDefaults:
            if entry.GetName() != structs.ProxyConfigGlobal {
                panic("the only supported proxy-defaults name is '" + structs.ProxyConfigGlobal + "'")
            }
            e.AddProxyDefaults(entry.(*structs.ProxyConfigEntry))
        default:
            panic("unhandled config entry kind: " + entry.GetKind())
        }
    }
}

// IsEmpty returns true if there are no config entries at all in the response.
// You should prefer this over IsChainEmpty() in most cases.
func (e *DiscoveryChainSet) IsEmpty() bool {
    return e.IsChainEmpty() && len(e.Services) == 0 && len(e.ProxyDefaults) == 0
}

// IsChainEmpty returns true if there are no service-routers,
// service-splitters, or service-resolvers that are present. These config
// entries are the primary parts of the discovery chain.
func (e *DiscoveryChainSet) IsChainEmpty() bool {
    return len(e.Routers) == 0 && len(e.Splitters) == 0 && len(e.Resolvers) == 0
}
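A hedged sketch of how the convenience helpers above compose in a test, assumed to live in the same package; the service name and the specific entries are invented. `AddEntries` dispatches on kind, and the typed getters and emptiness checks read the result back.

```go
package configentry

import (
	"testing"

	"github.com/hashicorp/consul/agent/structs"
	"github.com/stretchr/testify/require"
)

func TestDiscoveryChainSetSketch(t *testing.T) {
	set := NewDiscoveryChainSet()

	// AddEntries dispatches on the config entry kind and fills the right map.
	set.AddEntries(
		&structs.ServiceResolverConfigEntry{Kind: structs.ServiceResolver, Name: "web"},
		&structs.ServiceConfigEntry{Kind: structs.ServiceDefaults, Name: "web"},
	)

	sid := structs.NewServiceID("web", nil)
	require.NotNil(t, set.GetResolver(sid))
	require.NotNil(t, set.GetService(sid))
	require.Nil(t, set.GetRouter(sid)) // nothing of that kind was added

	require.False(t, set.IsEmpty())
	require.False(t, set.IsChainEmpty())
}
```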
@@ -0,0 +1,5 @@
// Package configentry contains structs and logic related to the Configuration
// Entry subsystem. Currently this is restricted to structs used during
// runtime, but which are not serialized to the network or disk.

package configentry
@@ -9,11 +9,7 @@ import (
    "github.com/hashicorp/consul/agent/connect"
)

-func validateSetIntermediate(
-    intermediatePEM, rootPEM string,
-    currentPrivateKey string, // optional
-    spiffeID *connect.SpiffeIDSigning,
-) error {
+func validateSetIntermediate(intermediatePEM, rootPEM string, spiffeID *connect.SpiffeIDSigning) error {
    // Get the key from the incoming intermediate cert so we can compare it
    // to the currently stored key.
    intermediate, err := connect.ParseCert(intermediatePEM)

@@ -21,26 +17,6 @@
        return fmt.Errorf("error parsing intermediate PEM: %v", err)
    }

-    if currentPrivateKey != "" {
-        privKey, err := connect.ParseSigner(currentPrivateKey)
-        if err != nil {
-            return err
-        }
-
-        // Compare the two keys to make sure they match.
-        b1, err := x509.MarshalPKIXPublicKey(intermediate.PublicKey)
-        if err != nil {
-            return err
-        }
-        b2, err := x509.MarshalPKIXPublicKey(privKey.Public())
-        if err != nil {
-            return err
-        }
-        if !bytes.Equal(b1, b2) {
-            return fmt.Errorf("intermediate cert is for a different private key")
-        }
-    }
-
    // Validate the remaining fields and make sure the intermediate validates against
    // the given root cert.
    if !intermediate.IsCA {

@@ -65,6 +41,32 @@
    return nil
}

+func validateIntermediateSignedByPrivateKey(intermediatePEM string, privateKey string) error {
+    intermediate, err := connect.ParseCert(intermediatePEM)
+    if err != nil {
+        return fmt.Errorf("error parsing intermediate PEM: %v", err)
+    }
+
+    privKey, err := connect.ParseSigner(privateKey)
+    if err != nil {
+        return err
+    }
+
+    // Compare the two keys to make sure they match.
+    b1, err := x509.MarshalPKIXPublicKey(intermediate.PublicKey)
+    if err != nil {
+        return err
+    }
+    b2, err := x509.MarshalPKIXPublicKey(privKey.Public())
+    if err != nil {
+        return err
+    }
+    if !bytes.Equal(b1, b2) {
+        return fmt.Errorf("intermediate cert is for a different private key")
+    }
+    return nil
+}
+
func validateSignIntermediate(csr *x509.CertificateRequest, spiffeID *connect.SpiffeIDSigning) error {
    // We explicitly _don't_ require that the CSR has a valid SPIFFE signing URI
    // SAN because AWS PCA doesn't let us set one :(. We need to relax it here
@@ -135,6 +135,7 @@ type PrimaryProvider interface {
     // the active intermediate. If multiple intermediates are needed to complete
     // the chain from the signing certificate back to the active root, they should
     // all by bundled here.
+    // TODO: replace with GenerateLeafSigningCert (https://github.com/hashicorp/consul/issues/12386)
     GenerateIntermediate() (string, error)
 
     // SignIntermediate will validate the CSR to ensure the trust domain in the
@@ -171,14 +172,20 @@ type PrimaryProvider interface {
 }
 
 type SecondaryProvider interface {
-    // GenerateIntermediateCSR generates a CSR for an intermediate CA
-    // certificate, to be signed by the root of another datacenter. If IsPrimary was
-    // set to true with Configure(), calling this is an error.
+    // GenerateIntermediateCSR should return a CSR for an intermediate CA
+    // certificate. The intermediate CA will be signed by the primary CA and
+    // should be used by the provider to sign leaf certificates in the local
+    // datacenter.
+    //
+    // After the certificate is signed, SecondaryProvider.SetIntermediate will
+    // be called to store the intermediate CA.
     GenerateIntermediateCSR() (string, error)
 
-    // SetIntermediate sets the provider to use the given intermediate certificate
-    // as well as the root it was signed by. This completes the initialization for
-    // a provider where IsPrimary was set to false in Configure().
+    // SetIntermediate is called to store a newly signed leaf signing certificate and
+    // the chain of certificates back to the root CA certificate.
+    //
+    // The provider should save the certificates and use them to
+    // Provider.Sign leaf certificates.
     SetIntermediate(intermediatePEM, rootPEM string) error
 }
 
@@ -186,7 +193,12 @@ type SecondaryProvider interface {
 //
 // TODO: rename this struct
 type RootResult struct {
-    // PEM encoded certificate that will be used as the primary CA.
+    // PEM encoded bundle of CA certificates. The first certificate must be the
+    // primary CA used to sign intermediates for secondary datacenters, and the
+    // last certificate must be the trusted CA.
+    //
+    // If there is only a single certificate in the bundle then it will be used
+    // as both the primary CA and the trusted CA.
     PEM string
 }
 
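The reworked SecondaryProvider comments above describe a three-step flow: the secondary generates a CSR, the primary signs it, and the secondary stores the result with SetIntermediate. The sketch below is not code from this change; it only wires the documented interface methods together, and it assumes the `connect.ParseCSR` helper and the `SignIntermediate(*x509.CertificateRequest)` signature referenced elsewhere in this diff.

```go
// Sketch only (not part of this diff): driving the documented secondary flow.
func setSecondaryIntermediate(primary PrimaryProvider, secondary SecondaryProvider, rootPEM string) error {
    // 1. The secondary produces a CSR for its leaf signing certificate.
    csrPEM, err := secondary.GenerateIntermediateCSR()
    if err != nil {
        return err
    }
    // 2. The primary parses, validates, and signs the CSR.
    csr, err := connect.ParseCSR(csrPEM) // assumed helper from the connect package
    if err != nil {
        return err
    }
    intermediatePEM, err := primary.SignIntermediate(csr)
    if err != nil {
        return err
    }
    // 3. The secondary stores the signed cert and the chain back to the root.
    return secondary.SetIntermediate(intermediatePEM, rootPEM)
}
```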
@@ -253,12 +253,10 @@ func (c *ConsulProvider) SetIntermediate(intermediatePEM, rootPEM string) error
         return fmt.Errorf("cannot set an intermediate using another root in the primary datacenter")
     }
 
-    err = validateSetIntermediate(
-        intermediatePEM, rootPEM,
-        providerState.PrivateKey,
-        c.spiffeID,
-    )
-    if err != nil {
+    if err = validateSetIntermediate(intermediatePEM, rootPEM, c.spiffeID); err != nil {
+        return err
+    }
+    if err := validateIntermediateSignedByPrivateKey(intermediatePEM, providerState.PrivateKey); err != nil {
         return err
     }
 
@@ -279,7 +279,7 @@ func (v *VaultProvider) GenerateRoot() (RootResult, error) {
         if err != nil {
             return RootResult{}, err
         }
-        _, err = v.client.Logical().Write(v.config.RootPKIPath+"root/generate/internal", map[string]interface{}{
+        resp, err := v.client.Logical().Write(v.config.RootPKIPath+"root/generate/internal", map[string]interface{}{
             "common_name": connect.CACN("vault", uid, v.clusterID, v.isPrimary),
             "uri_sans":    v.spiffeID.URI().String(),
             "key_type":    v.config.PrivateKeyType,
@@ -288,12 +288,10 @@ func (v *VaultProvider) GenerateRoot() (RootResult, error) {
         if err != nil {
             return RootResult{}, err
         }
-        // retrieve the newly generated cert so that we can return it
-        // TODO: is this already available from the Local().Write() above?
-        rootPEM, err = v.getCA(v.config.RootPKIPath)
-        if err != nil {
-            return RootResult{}, err
+        var ok bool
+        rootPEM, ok = resp.Data["certificate"].(string)
+        if !ok {
+            return RootResult{}, fmt.Errorf("unexpected response from Vault: %v", resp.Data["certificate"])
         }
 
     default:
@@ -302,7 +300,18 @@ func (v *VaultProvider) GenerateRoot() (RootResult, error) {
         }
     }
 
-    return RootResult{PEM: rootPEM}, nil
+    rootChain, err := v.getCAChain(v.config.RootPKIPath)
+    if err != nil {
+        return RootResult{}, err
+    }
+
+    // Workaround for a bug in the Vault PKI API.
+    // See https://github.com/hashicorp/vault/issues/13489
+    if rootChain == "" {
+        rootChain = rootPEM
+    }
+
+    return RootResult{PEM: rootChain}, nil
 }
 
 // GenerateIntermediateCSR creates a private key and generates a CSR
@@ -402,8 +411,7 @@ func (v *VaultProvider) SetIntermediate(intermediatePEM, rootPEM string) error {
         return fmt.Errorf("cannot set an intermediate using another root in the primary datacenter")
     }
 
-    // the private key is in vault, so we can't use it in this validation
-    err := validateSetIntermediate(intermediatePEM, rootPEM, "", v.spiffeID)
+    err := validateSetIntermediate(intermediatePEM, rootPEM, v.spiffeID)
     if err != nil {
         return err
     }
@@ -468,6 +476,29 @@ func (v *VaultProvider) getCA(path string) (string, error) {
     return root, nil
 }
 
+// TODO: refactor to remove duplication with getCA
+func (v *VaultProvider) getCAChain(path string) (string, error) {
+    req := v.client.NewRequest("GET", "/v1/"+path+"/ca_chain")
+    resp, err := v.client.RawRequest(req)
+    if resp != nil {
+        defer resp.Body.Close()
+    }
+    if resp != nil && resp.StatusCode == http.StatusNotFound {
+        return "", ErrBackendNotMounted
+    }
+    if err != nil {
+        return "", err
+    }
+
+    raw, err := ioutil.ReadAll(resp.Body)
+    if err != nil {
+        return "", err
+    }
+
+    root := EnsureTrailingNewline(string(raw))
+    return root, nil
+}
+
 // GenerateIntermediate mounts the configured intermediate PKI backend if
 // necessary, then generates and signs a new CA CSR using the root PKI backend
 // and updates the intermediate backend to use that new certificate.
@@ -529,12 +560,7 @@ func (v *VaultProvider) Sign(csr *x509.CertificateRequest) (string, error) {
     if !ok {
         return "", fmt.Errorf("certificate was not a string")
     }
-    ca, ok := response.Data["issuing_ca"].(string)
-    if !ok {
-        return "", fmt.Errorf("issuing_ca was not a string")
-    }
-
-    return EnsureTrailingNewline(cert) + EnsureTrailingNewline(ca), nil
+    return EnsureTrailingNewline(cert), nil
 }
 
 // SignIntermediate returns a signed CA certificate with a path length constraint
@@ -20,11 +20,11 @@ const (
     DefaultIntermediateCertTTL = 24 * 365 * time.Hour
 )
 
-func pemEncodeKey(key []byte, blockType string) (string, error) {
+func pemEncode(value []byte, blockType string) (string, error) {
     var buf bytes.Buffer
 
-    if err := pem.Encode(&buf, &pem.Block{Type: blockType, Bytes: key}); err != nil {
-        return "", fmt.Errorf("error encoding private key: %s", err)
+    if err := pem.Encode(&buf, &pem.Block{Type: blockType, Bytes: value}); err != nil {
+        return "", fmt.Errorf("error encoding value %v: %s", blockType, err)
     }
     return buf.String(), nil
 }
@@ -38,7 +38,7 @@ func generateRSAKey(keyBits int) (crypto.Signer, string, error) {
     }
 
     bs := x509.MarshalPKCS1PrivateKey(pk)
-    pemBlock, err := pemEncodeKey(bs, "RSA PRIVATE KEY")
+    pemBlock, err := pemEncode(bs, "RSA PRIVATE KEY")
     if err != nil {
         return nil, "", err
     }
@@ -73,7 +73,7 @@ func generateECDSAKey(keyBits int) (crypto.Signer, string, error) {
         return nil, "", fmt.Errorf("error marshaling ECDSA private key: %s", err)
     }
 
-    pemBlock, err := pemEncodeKey(bs, "EC PRIVATE KEY")
+    pemBlock, err := pemEncode(bs, "EC PRIVATE KEY")
     if err != nil {
         return nil, "", err
     }
@@ -56,6 +56,21 @@ func ParseLeafCerts(pemValue string) (*x509.Certificate, *x509.CertPool, error)
     return leaf, intermediates, nil
 }
 
+// CertSubjects can be used in debugging to return the subject of each
+// certificate in the PEM bundle. Each subject is separated by a newline.
+func CertSubjects(pem string) string {
+    certs, err := parseCerts(pem)
+    if err != nil {
+        return err.Error()
+    }
+    var buf strings.Builder
+    for _, cert := range certs {
+        buf.WriteString(cert.Subject.String())
+        buf.WriteString("\n")
+    }
+    return buf.String()
+}
+
 // ParseCerts parses the all x509 certificates from a PEM-encoded value.
 // The first returned cert is a leaf cert and any other ones are intermediates.
 //
@@ -90,21 +105,10 @@ func parseCerts(pemValue string) ([]*x509.Certificate, error) {
     return out, nil
 }
 
-// CalculateCertFingerprint parses the x509 certificate from a PEM-encoded value
-// and calculates the SHA-1 fingerprint.
-func CalculateCertFingerprint(pemValue string) (string, error) {
-    // The _ result below is not an error but the remaining PEM bytes.
-    block, _ := pem.Decode([]byte(pemValue))
-    if block == nil {
-        return "", fmt.Errorf("no PEM-encoded data found")
-    }
-
-    if block.Type != "CERTIFICATE" {
-        return "", fmt.Errorf("first PEM-block should be CERTIFICATE type")
-    }
-
-    hash := sha1.Sum(block.Bytes)
-    return HexString(hash[:]), nil
+// CalculateCertFingerprint calculates the SHA-1 fingerprint from the cert bytes.
+func CalculateCertFingerprint(cert []byte) string {
+    hash := sha1.Sum(cert)
+    return HexString(hash[:])
 }
 
 // ParseSigner parses a crypto.Signer from a PEM-encoded key. The private key
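With the new CalculateCertFingerprint signature, callers fingerprint the raw DER bytes of an already-parsed certificate instead of passing a PEM string. A small hedged sketch of the call pattern follows; the helper name is made up for illustration, while connect.ParseCert and connect.CalculateCertFingerprint are the functions shown in this diff.

```go
// fingerprintFromPEM is a hypothetical helper demonstrating the new call pattern.
func fingerprintFromPEM(pemValue string) (string, error) {
    cert, err := connect.ParseCert(pemValue)
    if err != nil {
        return "", err
    }
    // SHA-1 over the certificate's raw DER bytes.
    return connect.CalculateCertFingerprint(cert.Raw), nil
}
```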
@@ -112,10 +112,7 @@ func testCA(t testing.T, xc *structs.CARoot, keyType string, keyBits int, ttl ti
         t.Fatalf("error encoding private key: %s", err)
     }
     result.RootCert = buf.String()
-    result.ID, err = CalculateCertFingerprint(result.RootCert)
-    if err != nil {
-        t.Fatalf("error generating CA ID fingerprint: %s", err)
-    }
+    result.ID = CalculateCertFingerprint(bs)
     result.SerialNumber = uint64(sn.Int64())
     result.NotBefore = template.NotBefore.UTC()
     result.NotAfter = template.NotAfter.UTC()
@@ -322,6 +322,9 @@ func (a *ACL) TokenRead(args *structs.ACLTokenGetRequest, reply *structs.ACLToke
 
             reply.Index, reply.Token = index, token
             reply.SourceDatacenter = args.Datacenter
+            if token == nil {
+                return errNotFound
+            }
             return nil
         })
 }
@@ -1045,6 +1048,9 @@ func (a *ACL) PolicyRead(args *structs.ACLPolicyGetRequest, reply *structs.ACLPo
             }
 
             reply.Index, reply.Policy = index, policy
+            if policy == nil {
+                return errNotFound
+            }
             return nil
         })
 }
@@ -1428,6 +1434,9 @@ func (a *ACL) RoleRead(args *structs.ACLRoleGetRequest, reply *structs.ACLRoleRe
             }
 
             reply.Index, reply.Role = index, role
+            if role == nil {
+                return errNotFound
+            }
             return nil
         })
 }
@@ -1795,12 +1804,14 @@ func (a *ACL) BindingRuleRead(args *structs.ACLBindingRuleGetRequest, reply *str
     return a.srv.blockingQuery(&args.QueryOptions, &reply.QueryMeta,
         func(ws memdb.WatchSet, state *state.Store) error {
             index, rule, err := state.ACLBindingRuleGetByID(ws, args.BindingRuleID, &args.EnterpriseMeta)
 
             if err != nil {
                 return err
             }
 
             reply.Index, reply.BindingRule = index, rule
+            if rule == nil {
+                return errNotFound
+            }
             return nil
         })
 }
@@ -2052,16 +2063,16 @@ func (a *ACL) AuthMethodRead(args *structs.ACLAuthMethodGetRequest, reply *struc
     return a.srv.blockingQuery(&args.QueryOptions, &reply.QueryMeta,
         func(ws memdb.WatchSet, state *state.Store) error {
             index, method, err := state.ACLAuthMethodGetByName(ws, args.AuthMethodName, &args.EnterpriseMeta)
 
             if err != nil {
                 return err
             }
 
-            if method != nil {
-                _ = a.enterpriseAuthMethodTypeValidation(method.Type)
+            reply.Index, reply.AuthMethod = index, method
+            if method == nil {
+                return errNotFound
             }
 
-            reply.Index, reply.AuthMethod = index, method
+            _ = a.enterpriseAuthMethodTypeValidation(method.Type)
             return nil
         })
 }
@@ -205,12 +205,10 @@ func (c *ConfigEntry) Get(args *structs.ConfigEntryQuery, reply *structs.ConfigE
                 return err
             }
 
-            reply.Index = index
+            reply.Index, reply.Entry = index, entry
             if entry == nil {
-                return nil
+                return errNotFound
             }
 
-            reply.Entry = entry
             return nil
         })
 }
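The Get/Read endpoints changed above now populate the reply index and return errNotFound when the item is missing. Presumably the shared blockingQuery helper treats that sentinel as "not found yet, keep waiting" rather than as a hard failure, which is what lets a query for a not-yet-created item block until it exists or the wait time elapses. A deliberately simplified model of that behavior (not Consul's actual blockingQuery implementation) is sketched below; `errNotFound` and `notify` stand in for the real sentinel and the memdb watch machinery.

```go
// Simplified model only: re-run the query on state changes while the item is
// missing, and return the not-found result once the wait time elapses.
func blockUntilFoundOrTimeout(ctx context.Context, notify <-chan struct{}, run func() error) error {
    for {
        err := run()
        if !errors.Is(err, errNotFound) {
            return err // found the item, or hit a real error
        }
        select {
        case <-notify:
            // State changed; check whether the item exists now.
        case <-ctx.Done():
            // Wait time elapsed; report the not-found result to the caller.
            return nil
        }
    }
}
```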
@@ -1,6 +1,7 @@
 package consul
 
 import (
+    "context"
     "fmt"
     "os"
     "sort"
@@ -9,6 +10,7 @@ import (
 
     msgpackrpc "github.com/hashicorp/consul-net-rpc/net-rpc-msgpackrpc"
     "github.com/stretchr/testify/require"
+    "golang.org/x/sync/errgroup"
 
     "github.com/hashicorp/consul/acl"
     "github.com/hashicorp/consul/agent/structs"
@@ -302,6 +304,71 @@ func TestConfigEntry_Get(t *testing.T) {
     require.Equal(t, structs.ServiceDefaults, serviceConf.Kind)
 }
 
+func TestConfigEntry_Get_BlockOnNonExistent(t *testing.T) {
+    if testing.Short() {
+        t.Skip("too slow for testing.Short")
+    }
+
+    _, s1 := testServerWithConfig(t)
+    codec := rpcClient(t, s1)
+    store := s1.fsm.State()
+
+    entry := &structs.ServiceConfigEntry{
+        Kind: structs.ServiceDefaults,
+        Name: "alpha",
+    }
+    require.NoError(t, store.EnsureConfigEntry(1, entry))
+
+    ctx, cancel := context.WithCancel(context.Background())
+    defer cancel()
+
+    var count int
+
+    g, ctx := errgroup.WithContext(ctx)
+    g.Go(func() error {
+        args := structs.ConfigEntryQuery{
+            Kind: structs.ServiceDefaults,
+            Name: "does-not-exist",
+        }
+        args.QueryOptions.MaxQueryTime = time.Second
+
+        for ctx.Err() == nil {
+            var out structs.ConfigEntryResponse
+
+            err := msgpackrpc.CallWithCodec(codec, "ConfigEntry.Get", &args, &out)
+            if err != nil {
+                return err
+            }
+            t.Log("blocking query index", out.QueryMeta.Index, out.Entry)
+            count++
+            args.QueryOptions.MinQueryIndex = out.QueryMeta.Index
+        }
+        return nil
+    })
+
+    g.Go(func() error {
+        for i := uint64(0); i < 200; i++ {
+            time.Sleep(5 * time.Millisecond)
+            entry := &structs.ServiceConfigEntry{
+                Kind: structs.ServiceDefaults,
+                Name: fmt.Sprintf("other%d", i),
+            }
+            if err := store.EnsureConfigEntry(i+2, entry); err != nil {
+                return err
+            }
+        }
+        cancel()
+        return nil
+    })
+
+    require.NoError(t, g.Wait())
+    // The test is a bit racy because of the timing of the two goroutines, so
+    // we relax the check for the count to be within a small range.
+    if count < 2 || count > 3 {
+        t.Fatalf("expected count to be 2 or 3, got %d", count)
+    }
+}
+
 func TestConfigEntry_Get_ACLDeny(t *testing.T) {
     if testing.Short() {
         t.Skip("too slow for testing.Short")
@@ -267,6 +267,7 @@ func (c *Coordinate) Node(args *structs.NodeSpecificRequest, reply *structs.Inde
         })
     }
     reply.Index, reply.Coordinates = index, coords
+
     return nil
     })
 }
@@ -8,6 +8,7 @@ import (
     "github.com/mitchellh/hashstructure"
     "github.com/mitchellh/mapstructure"
 
+    "github.com/hashicorp/consul/agent/configentry"
     "github.com/hashicorp/consul/agent/connect"
     "github.com/hashicorp/consul/agent/structs"
 )
@@ -37,7 +38,7 @@ type CompileRequest struct {
     // overridden for any resolver in the compiled chain.
     OverrideConnectTimeout time.Duration
 
-    Entries *structs.DiscoveryChainConfigEntries
+    Entries *configentry.DiscoveryChainSet
 }
 
 // Compile assembles a discovery chain in the form of a graph of nodes using
@@ -131,7 +132,7 @@ type compiler struct {
     // config entries that are being compiled (will be mutated during compilation)
     //
     // This is an INPUT field.
-    entries *structs.DiscoveryChainConfigEntries
+    entries *configentry.DiscoveryChainSet
 
     // resolvers is initially seeded by copying the provided entries.Resolvers
     // map and default resolvers are added as they are needed.
@@ -4,13 +4,15 @@ import (
     "testing"
     "time"
 
+    "github.com/stretchr/testify/require"
+
+    "github.com/hashicorp/consul/agent/configentry"
     "github.com/hashicorp/consul/agent/connect"
     "github.com/hashicorp/consul/agent/structs"
-    "github.com/stretchr/testify/require"
 )
 
 type compileTestCase struct {
-    entries *structs.DiscoveryChainConfigEntries
+    entries *configentry.DiscoveryChainSet
     setup   func(req *CompileRequest)
     expect  *structs.CompiledDiscoveryChain
     // expectIsDefault tests behavior of CompiledDiscoveryChain.IsDefault()
@@ -2645,7 +2647,7 @@ func newSimpleRoute(name string, muts ...func(*structs.ServiceRoute)) structs.Se
     return r
 }
 
-func setGlobalProxyProtocol(entries *structs.DiscoveryChainConfigEntries, protocol string) {
+func setGlobalProxyProtocol(entries *configentry.DiscoveryChainSet, protocol string) {
     entries.AddProxyDefaults(&structs.ProxyConfigEntry{
         Kind: structs.ProxyDefaults,
         Name: structs.ProxyConfigGlobal,
@@ -2655,7 +2657,7 @@ func setGlobalProxyProtocol(entries *structs.DiscoveryChainConfigEntries, protoc
     })
 }
 
-func setServiceProtocol(entries *structs.DiscoveryChainConfigEntries, name, protocol string) {
+func setServiceProtocol(entries *configentry.DiscoveryChainSet, name, protocol string) {
     entries.AddServices(&structs.ServiceConfigEntry{
         Kind: structs.ServiceDefaults,
         Name: name,
@@ -2663,8 +2665,8 @@ func setServiceProtocol(entries *structs.DiscoveryChainConfigEntries, name, prot
     })
 }
 
-func newEntries() *structs.DiscoveryChainConfigEntries {
-    return &structs.DiscoveryChainConfigEntries{
+func newEntries() *configentry.DiscoveryChainSet {
+    return &configentry.DiscoveryChainSet{
         Routers:   make(map[structs.ServiceID]*structs.ServiceRouterConfigEntry),
         Splitters: make(map[structs.ServiceID]*structs.ServiceSplitterConfigEntry),
         Resolvers: make(map[structs.ServiceID]*structs.ServiceResolverConfigEntry),
@@ -1,9 +1,11 @@
 package discoverychain
 
 import (
-    "github.com/hashicorp/consul/agent/structs"
     "github.com/mitchellh/go-testing-interface"
     "github.com/stretchr/testify/require"
+
+    "github.com/hashicorp/consul/agent/configentry"
+    "github.com/hashicorp/consul/agent/structs"
 )
 
 func TestCompileConfigEntries(t testing.T,
@@ -13,7 +15,7 @@ func TestCompileConfigEntries(t testing.T,
     evaluateInDatacenter string,
     evaluateInTrustDomain string,
     setup func(req *CompileRequest), entries ...structs.ConfigEntry) *structs.CompiledDiscoveryChain {
-    set := structs.NewDiscoveryChainConfigEntries()
+    set := configentry.NewDiscoveryChainSet()
 
     set.AddEntries(entries...)
 
@@ -122,12 +122,10 @@ func (c *FederationState) Get(args *structs.FederationStateQuery, reply *structs
             return err
         }
 
-        reply.Index = index
+        reply.Index, reply.State = index, fedState
         if fedState == nil {
-            return nil
+            return errNotFound
         }
 
-        reply.State = fedState
         return nil
     })
 }
@@ -160,18 +160,13 @@ func (k *KVS) Get(args *structs.KeyRequest, reply *structs.IndexedDirEntries) er
         }
 
         if ent == nil {
-            // Must provide non-zero index to prevent blocking
-            // Index 1 is impossible anyways (due to Raft internals)
-            if index == 0 {
-                reply.Index = 1
-            } else {
-                reply.Index = index
-            }
+            reply.Index = index
             reply.Entries = nil
-        } else {
-            reply.Index = ent.ModifyIndex
-            reply.Entries = structs.DirEntries{ent}
+            return errNotFound
         }
+
+        reply.Index = ent.ModifyIndex
+        reply.Entries = structs.DirEntries{ent}
         return nil
     })
 }
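From a client's point of view, the KVS.Get change above means a blocking read of a missing key should now stay blocked until the key is written (or the wait time passes) instead of waking on unrelated writes. The example below uses the public github.com/hashicorp/consul/api client, which is not part of this diff, to show the standard watch loop that benefits from this behavior.

```go
// watchKey blocks until key exists, using the api package's blocking-query options.
func watchKey(client *api.Client, key string) ([]byte, error) {
    var lastIndex uint64
    for {
        pair, meta, err := client.KV().Get(key, &api.QueryOptions{
            WaitIndex: lastIndex,
            WaitTime:  10 * time.Second,
        })
        if err != nil {
            return nil, err
        }
        lastIndex = meta.LastIndex
        if pair != nil {
            return pair.Value, nil
        }
        // Key still missing: issue another blocking query at the new index.
    }
}
```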
@@ -253,28 +253,24 @@ func (c *CAManager) initializeCAConfig() (*structs.CAConfiguration, error) {
     return config, nil
 }
 
-// parseCARoot returns a filled-in structs.CARoot from a raw PEM value.
-func parseCARoot(pemValue, provider, clusterID string) (*structs.CARoot, error) {
-    id, err := connect.CalculateCertFingerprint(pemValue)
+// newCARoot returns a filled-in structs.CARoot from a raw PEM value.
+func newCARoot(pemValue, provider, clusterID string) (*structs.CARoot, error) {
+    primaryCert, err := connect.ParseCert(pemValue)
     if err != nil {
-        return nil, fmt.Errorf("error parsing root fingerprint: %v", err)
+        return nil, err
     }
-    rootCert, err := connect.ParseCert(pemValue)
-    if err != nil {
-        return nil, fmt.Errorf("error parsing root cert: %v", err)
-    }
-    keyType, keyBits, err := connect.KeyInfoFromCert(rootCert)
+    keyType, keyBits, err := connect.KeyInfoFromCert(primaryCert)
     if err != nil {
         return nil, fmt.Errorf("error extracting root key info: %v", err)
     }
     return &structs.CARoot{
-        ID:                  id,
-        Name:                fmt.Sprintf("%s CA Root Cert", strings.Title(provider)),
-        SerialNumber:        rootCert.SerialNumber.Uint64(),
-        SigningKeyID:        connect.EncodeSigningKeyID(rootCert.SubjectKeyId),
+        ID:                  connect.CalculateCertFingerprint(primaryCert.Raw),
+        Name:                fmt.Sprintf("%s CA Primary Cert", strings.Title(provider)),
+        SerialNumber:        primaryCert.SerialNumber.Uint64(),
+        SigningKeyID:        connect.EncodeSigningKeyID(primaryCert.SubjectKeyId),
         ExternalTrustDomain: clusterID,
-        NotBefore:           rootCert.NotBefore,
-        NotAfter:            rootCert.NotAfter,
+        NotBefore:           primaryCert.NotBefore,
+        NotAfter:            primaryCert.NotAfter,
         RootCert:            pemValue,
         PrivateKeyType:      keyType,
         PrivateKeyBits:      keyBits,
@@ -435,7 +431,7 @@ func (c *CAManager) secondaryInitialize(provider ca.Provider, conf *structs.CACo
     }
     var roots structs.IndexedCARoots
     if err := c.delegate.forwardDC("ConnectCA.Roots", c.serverConf.PrimaryDatacenter, &args, &roots); err != nil {
-        return err
+        return fmt.Errorf("failed to get CA roots from primary DC: %w", err)
     }
     c.secondarySetPrimaryRoots(roots)
 
@@ -487,12 +483,12 @@ func (c *CAManager) primaryInitialize(provider ca.Provider, conf *structs.CAConf
         return fmt.Errorf("error generating CA root certificate: %v", err)
     }
 
-    rootCA, err := parseCARoot(root.PEM, conf.Provider, conf.ClusterID)
+    rootCA, err := newCARoot(root.PEM, conf.Provider, conf.ClusterID)
     if err != nil {
         return err
     }
 
-    // Also create the intermediate CA, which is the one that actually signs leaf certs
+    // TODO: https://github.com/hashicorp/consul/issues/12386
     interPEM, err := provider.GenerateIntermediate()
     if err != nil {
         return fmt.Errorf("error generating intermediate cert: %v", err)
@@ -887,7 +883,7 @@ func (c *CAManager) primaryUpdateRootCA(newProvider ca.Provider, args *structs.C
     }
 
     newRootPEM := providerRoot.PEM
-    newActiveRoot, err := parseCARoot(newRootPEM, args.Config.Provider, args.Config.ClusterID)
+    newActiveRoot, err := newCARoot(newRootPEM, args.Config.Provider, args.Config.ClusterID)
     if err != nil {
         return err
     }
@@ -940,7 +936,7 @@ func (c *CAManager) primaryUpdateRootCA(newProvider ca.Provider, args *structs.C
     // get a cross-signed certificate.
     // 3. Take the active root for the new provider and append the intermediate from step 2
     //    to its list of intermediates.
-    // TODO: this cert is already parsed once in parseCARoot, could we remove the second parse?
+    // TODO: this cert is already parsed once in newCARoot, could we remove the second parse?
     newRoot, err := connect.ParseCert(newRootPEM)
     if err != nil {
         return err
@@ -980,6 +976,7 @@ func (c *CAManager) primaryUpdateRootCA(newProvider ca.Provider, args *structs.C
         }
     }
 
+    // TODO: https://github.com/hashicorp/consul/issues/12386
     intermediate, err := newProvider.GenerateIntermediate()
     if err != nil {
         return err
@@ -12,6 +12,7 @@ import (
     "fmt"
     "math/big"
     "net/url"
+    "strings"
     "testing"
     "time"
 
@@ -82,7 +83,6 @@ func TestCAManager_Initialize_Vault_Secondary_SharedVault(t *testing.T) {
             },
         }
     })
-    defer serverDC2.Shutdown()
     joinWAN(t, serverDC2, serverDC1)
     testrpc.WaitForActiveCARoot(t, serverDC2.RPC, "dc2", nil)
 
@@ -98,15 +98,26 @@ func TestCAManager_Initialize_Vault_Secondary_SharedVault(t *testing.T) {
 }
 
 func verifyLeafCert(t *testing.T, root *structs.CARoot, leafCertPEM string) {
+    t.Helper()
+    roots := structs.IndexedCARoots{
+        ActiveRootID: root.ID,
+        Roots:        []*structs.CARoot{root},
+    }
+    verifyLeafCertWithRoots(t, roots, leafCertPEM)
+}
+
+func verifyLeafCertWithRoots(t *testing.T, roots structs.IndexedCARoots, leafCertPEM string) {
     t.Helper()
     leaf, intermediates, err := connect.ParseLeafCerts(leafCertPEM)
     require.NoError(t, err)
 
     pool := x509.NewCertPool()
-    ok := pool.AppendCertsFromPEM([]byte(root.RootCert))
+    for _, r := range roots.Roots {
+        ok := pool.AppendCertsFromPEM([]byte(r.RootCert))
         if !ok {
             t.Fatalf("Failed to add root CA PEM to cert pool")
         }
+    }
 
     // verify with intermediates from leaf CertPEM
     _, err = leaf.Verify(x509.VerifyOptions{
@@ -118,11 +129,13 @@ func verifyLeafCert(t *testing.T, root *structs.CARoot, leafCertPEM string) {
 
     // verify with intermediates from the CARoot
     intermediates = x509.NewCertPool()
-    for _, intermediate := range root.IntermediateCerts {
+    for _, r := range roots.Roots {
+        for _, intermediate := range r.IntermediateCerts {
             c, err := connect.ParseCert(intermediate)
             require.NoError(t, err)
             intermediates.AddCert(c)
         }
+    }
 
     _, err = leaf.Verify(x509.VerifyOptions{
         Roots: pool,
@@ -618,7 +631,7 @@ func TestCAManager_Initialize_Vault_WithIntermediateAsPrimaryCA(t *testing.T) {
     generateExternalRootCA(t, vclient)
 
     meshRootPath := "pki-root"
-    primaryCert := setupPrimaryCA(t, vclient, meshRootPath)
+    primaryCert := setupPrimaryCA(t, vclient, meshRootPath, "")
 
     _, s1 := testServerWithConfig(t, func(c *Config) {
         c.CAConfig = &structs.CAConfiguration{
@@ -628,14 +641,9 @@ func TestCAManager_Initialize_Vault_WithIntermediateAsPrimaryCA(t *testing.T) {
                 "Token":               vault.RootToken,
                 "RootPKIPath":         meshRootPath,
                 "IntermediatePKIPath": "pki-intermediate/",
-                // TODO: there are failures to init the CA system if these are not set
-                // to the values of the already initialized CA.
-                "PrivateKeyType": "ec",
-                "PrivateKeyBits": 256,
             },
         }
     })
-    defer s1.Shutdown()
 
     runStep(t, "check primary DC", func(t *testing.T) {
         testrpc.WaitForTestAgent(t, s1.RPC, "dc1")
@@ -665,10 +673,6 @@ func TestCAManager_Initialize_Vault_WithIntermediateAsPrimaryCA(t *testing.T) {
                 "Token":               vault.RootToken,
                 "RootPKIPath":         meshRootPath,
                 "IntermediatePKIPath": "pki-secondary/",
-                // TODO: there are failures to init the CA system if these are not set
-                // to the values of the already initialized CA.
-                "PrivateKeyType": "ec",
-                "PrivateKeyBits": 256,
             },
         }
     })
@@ -704,10 +708,224 @@ func getLeafCert(t *testing.T, codec rpc.ClientCodec, trustDomain string, dc str
     cert := structs.IssuedCert{}
     err = msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", &req, &cert)
     require.NoError(t, err)
 
     return cert.CertPEM
 }
 
+func TestCAManager_Initialize_Vault_WithExternalTrustedCA(t *testing.T) {
+    if testing.Short() {
+        t.Skip("too slow for testing.Short")
+    }
+    ca.SkipIfVaultNotPresent(t)
+
+    vault := ca.NewTestVaultServer(t)
+    vclient := vault.Client()
+    rootPEM := generateExternalRootCA(t, vclient)
+
+    primaryCAPath := "pki-primary"
+    primaryCert := setupPrimaryCA(t, vclient, primaryCAPath, rootPEM)
+
+    _, serverDC1 := testServerWithConfig(t, func(c *Config) {
+        c.CAConfig = &structs.CAConfiguration{
+            Provider: "vault",
+            Config: map[string]interface{}{
+                "Address":             vault.Addr,
+                "Token":               vault.RootToken,
+                "RootPKIPath":         primaryCAPath,
+                "IntermediatePKIPath": "pki-intermediate/",
+            },
+        }
+    })
+    testrpc.WaitForTestAgent(t, serverDC1.RPC, "dc1")
+
+    var origLeaf string
+    roots := structs.IndexedCARoots{}
+    runStep(t, "verify primary DC", func(t *testing.T) {
+        codec := rpcClient(t, serverDC1)
+        err := msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", &structs.DCSpecificRequest{}, &roots)
+        require.NoError(t, err)
+        require.Len(t, roots.Roots, 1)
+        require.Equal(t, primaryCert, roots.Roots[0].RootCert)
+        require.Contains(t, roots.Roots[0].RootCert, rootPEM)
+
+        leafCert := getLeafCert(t, codec, roots.TrustDomain, "dc1")
+        verifyLeafCert(t, roots.Active(), leafCert)
+        origLeaf = leafCert
+    })
+
+    _, serverDC2 := testServerWithConfig(t, func(c *Config) {
+        c.Datacenter = "dc2"
+        c.PrimaryDatacenter = "dc1"
+        c.CAConfig = &structs.CAConfiguration{
+            Provider: "vault",
+            Config: map[string]interface{}{
+                "Address":             vault.Addr,
+                "Token":               vault.RootToken,
+                "RootPKIPath":         "should-be-ignored",
+                "IntermediatePKIPath": "pki-secondary/",
+            },
+        }
+    })
+
+    var origLeafSecondary string
+    runStep(t, "start secondary DC", func(t *testing.T) {
+        joinWAN(t, serverDC2, serverDC1)
+        testrpc.WaitForActiveCARoot(t, serverDC2.RPC, "dc2", nil)
+
+        codec := rpcClient(t, serverDC2)
+        roots = structs.IndexedCARoots{}
+        err := msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", &structs.DCSpecificRequest{}, &roots)
+        require.NoError(t, err)
+        require.Len(t, roots.Roots, 1)
+
+        leafPEM := getLeafCert(t, codec, roots.TrustDomain, "dc2")
+        verifyLeafCert(t, roots.Roots[0], leafPEM)
+        origLeafSecondary = leafPEM
+    })
+
+    runStep(t, "renew leaf signing CA in primary", func(t *testing.T) {
+        previous := serverDC1.caManager.getLeafSigningCertFromRoot(roots.Active())
+
+        renewLeafSigningCert(t, serverDC1.caManager, serverDC1.caManager.primaryRenewIntermediate)
+
+        codec := rpcClient(t, serverDC1)
+        roots = structs.IndexedCARoots{}
+        err := msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", &structs.DCSpecificRequest{}, &roots)
+        require.NoError(t, err)
+        require.Len(t, roots.Roots, 1)
+        require.Len(t, roots.Roots[0].IntermediateCerts, 2)
+
+        newCert := serverDC1.caManager.getLeafSigningCertFromRoot(roots.Active())
+        require.NotEqual(t, previous, newCert)
+
+        leafPEM := getLeafCert(t, codec, roots.TrustDomain, "dc1")
+        verifyLeafCert(t, roots.Roots[0], leafPEM)
+
+        // original certs from old signing cert should still verify
+        verifyLeafCert(t, roots.Roots[0], origLeaf)
+    })
+
+    runStep(t, "renew leaf signing CA in secondary", func(t *testing.T) {
+        previous := serverDC2.caManager.getLeafSigningCertFromRoot(roots.Active())
+
+        renewLeafSigningCert(t, serverDC2.caManager, serverDC2.caManager.secondaryRequestNewSigningCert)
+
+        codec := rpcClient(t, serverDC2)
+        roots = structs.IndexedCARoots{}
+        err := msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", &structs.DCSpecificRequest{}, &roots)
+        require.NoError(t, err)
+        require.Len(t, roots.Roots, 1)
+        // one intermediate from primary, two from secondary
+        require.Len(t, roots.Roots[0].IntermediateCerts, 3)
+
+        newCert := serverDC1.caManager.getLeafSigningCertFromRoot(roots.Active())
+        require.NotEqual(t, previous, newCert)
+
+        leafPEM := getLeafCert(t, codec, roots.TrustDomain, "dc2")
+        verifyLeafCert(t, roots.Roots[0], leafPEM)
+
+        // original certs from old signing cert should still verify
+        verifyLeafCert(t, roots.Roots[0], origLeaf)
+    })
+
+    runStep(t, "rotate root by changing the provider", func(t *testing.T) {
+        codec := rpcClient(t, serverDC1)
+        req := &structs.CARequest{
+            Op: structs.CAOpSetConfig,
+            Config: &structs.CAConfiguration{
+                Provider: "consul",
+            },
+        }
+        var resp error
+        err := msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationSet", req, &resp)
+        require.NoError(t, err)
+        require.Nil(t, resp)
+
+        roots = structs.IndexedCARoots{}
+        err = msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", &structs.DCSpecificRequest{}, &roots)
+        require.NoError(t, err)
+        require.Len(t, roots.Roots, 2)
+        active := roots.Active()
+        require.Len(t, active.IntermediateCerts, 1)
+
+        leafPEM := getLeafCert(t, codec, roots.TrustDomain, "dc1")
+        verifyLeafCert(t, roots.Active(), leafPEM)
+
+        // original certs from old root cert should still verify
+        verifyLeafCertWithRoots(t, roots, origLeaf)
+
+        // original certs from secondary should still verify
+        rootsSecondary := structs.IndexedCARoots{}
+        r := &structs.DCSpecificRequest{Datacenter: "dc2"}
+        err = msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", r, &rootsSecondary)
+        require.NoError(t, err)
+        verifyLeafCertWithRoots(t, rootsSecondary, origLeafSecondary)
+    })
+
+    runStep(t, "rotate to a different external root", func(t *testing.T) {
+        setupPrimaryCA(t, vclient, "pki-primary-2/", rootPEM)
+
+        codec := rpcClient(t, serverDC1)
+        req := &structs.CARequest{
+            Op: structs.CAOpSetConfig,
+            Config: &structs.CAConfiguration{
+                Provider: "vault",
+                Config: map[string]interface{}{
+                    "Address":             vault.Addr,
+                    "Token":               vault.RootToken,
+                    "RootPKIPath":         "pki-primary-2/",
+                    "IntermediatePKIPath": "pki-intermediate-2/",
+                },
+            },
+        }
+        var resp error
+        err := msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationSet", req, &resp)
+        require.NoError(t, err)
+        require.Nil(t, resp)
+
+        roots = structs.IndexedCARoots{}
+        err = msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", &structs.DCSpecificRequest{}, &roots)
+        require.NoError(t, err)
+        require.Len(t, roots.Roots, 3)
+        active := roots.Active()
+        require.Len(t, active.IntermediateCerts, 2)
+
+        leafPEM := getLeafCert(t, codec, roots.TrustDomain, "dc1")
+        verifyLeafCert(t, roots.Active(), leafPEM)
+
+        // original certs from old root cert should still verify
+        verifyLeafCertWithRoots(t, roots, origLeaf)
+
+        // original certs from secondary should still verify
+        rootsSecondary := structs.IndexedCARoots{}
+        r := &structs.DCSpecificRequest{Datacenter: "dc2"}
+        err = msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", r, &rootsSecondary)
+        require.NoError(t, err)
+        verifyLeafCertWithRoots(t, rootsSecondary, origLeafSecondary)
+    })
+}
+
+// renewLeafSigningCert mimics RenewIntermediate. This is unfortunate, but
+// necessary for now as there is no easy way to invoke that logic unconditionally.
+// Currently, it requires patching values and polling for the operation to
+// complete, which adds a lot of distractions to a test case.
+// With this function we can instead unconditionally rotate the leaf signing cert
+// synchronously.
+func renewLeafSigningCert(t *testing.T, manager *CAManager, fn func(ca.Provider, *structs.CARoot) error) {
+    t.Helper()
+    provider, _ := manager.getCAProvider()
+
+    store := manager.delegate.State()
+    _, root, err := store.CARootActive(nil)
+    require.NoError(t, err)
+
+    activeRoot := root.Clone()
+    err = fn(provider, activeRoot)
+    require.NoError(t, err)
+    err = manager.persistNewRootAndConfig(provider, activeRoot, nil)
+    require.NoError(t, err)
+    manager.setCAProvider(provider, activeRoot)
+}
+
 func generateExternalRootCA(t *testing.T, client *vaultapi.Client) string {
     t.Helper()
     err := client.Sys().Mount("corp", &vaultapi.MountInput{
@@ -725,10 +943,10 @@ func generateExternalRootCA(t *testing.T, client *vaultapi.Client) string {
         "ttl": "2400h",
     })
     require.NoError(t, err, "failed to generate root")
-    return resp.Data["certificate"].(string)
+    return ca.EnsureTrailingNewline(resp.Data["certificate"].(string))
 }
 
-func setupPrimaryCA(t *testing.T, client *vaultapi.Client, path string) string {
+func setupPrimaryCA(t *testing.T, client *vaultapi.Client, path string, rootPEM string) string {
     t.Helper()
     err := client.Sys().Mount(path, &vaultapi.MountInput{
         Type: "pki",
@@ -756,9 +974,13 @@ func setupPrimaryCA(t *testing.T, client *vaultapi.Client, path string) string {
     })
     require.NoError(t, err, "failed to sign intermediate")
 
+    var buf strings.Builder
+    buf.WriteString(ca.EnsureTrailingNewline(intermediate.Data["certificate"].(string)))
+    buf.WriteString(ca.EnsureTrailingNewline(rootPEM))
+
     _, err = client.Logical().Write(path+"/intermediate/set-signed", map[string]interface{}{
-        "certificate": intermediate.Data["certificate"],
+        "certificate": buf.String(),
    })
     require.NoError(t, err, "failed to set signed intermediate")
-    return ca.EnsureTrailingNewline(intermediate.Data["certificate"].(string))
+    return ca.EnsureTrailingNewline(buf.String())
 }
 
@@ -15,11 +15,13 @@ import (
     msgpackrpc "github.com/hashicorp/consul-net-rpc/net-rpc-msgpackrpc"
     uuid "github.com/hashicorp/go-uuid"
     "github.com/stretchr/testify/require"
+    "gotest.tools/v3/assert"
 
     "github.com/hashicorp/consul/agent/connect"
     "github.com/hashicorp/consul/agent/connect/ca"
     "github.com/hashicorp/consul/agent/structs"
     "github.com/hashicorp/consul/agent/token"
+    "github.com/hashicorp/consul/sdk/testutil"
     "github.com/hashicorp/consul/sdk/testutil/retry"
     "github.com/hashicorp/consul/testrpc"
 )
@@ -1246,74 +1248,122 @@ func TestConnectCA_ConfigurationSet_PersistsRoots(t *testing.T) {
     })
 }
 
-func TestParseCARoot(t *testing.T) {
-    type test struct {
-        name             string
-        pem              string
-        wantSerial       uint64
-        wantSigningKeyID string
-        wantKeyType      string
-        wantKeyBits      int
-        wantErr          bool
-    }
-    // Test certs generated with
-    // go run connect/certgen/certgen.go -out-dir /tmp/connect-certs -key-type ec -key-bits 384
-    // for various key types. This does limit the exposure to formats that might
-    // exist in external certificates which can be used as Connect CAs.
-    // Specifically many other certs will have serial numbers that don't fit into
-    // 64 bits but for reasons we truncate down to 64 bits which means our
-    // `SerialNumber` will not match the one reported by openssl. We should
-    // probably fix that at some point as it seems like a big footgun but it would
-    // be a breaking API change to change the type to not be a JSON number and
-    // JSON numbers don't even support the full range of a uint64...
-    tests := []test{
-        {"no cert", "", 0, "", "", 0, true},
-        {
-            name: "default cert",
-            // Watchout for indentations they will break PEM format
-            pem: readTestData(t, "cert-with-ec-256-key.pem"),
-            // Based on `openssl x509 -noout -text` report from the cert
-            wantSerial:       8341954965092507701,
-            wantSigningKeyID: "97:4D:17:81:64:F8:B4:AF:05:E8:6C:79:C5:40:3B:0E:3E:8B:C0:AE:38:51:54:8A:2F:05:DB:E3:E8:E4:24:EC",
-            wantKeyType:      "ec",
-            wantKeyBits:      256,
-            wantErr:          false,
-        },
-        {
-            name: "ec 384 cert",
-            // Watchout for indentations they will break PEM format
-            pem: readTestData(t, "cert-with-ec-384-key.pem"),
-            // Based on `openssl x509 -noout -text` report from the cert
-            wantSerial:       2935109425518279965,
-            wantSigningKeyID: "0B:A0:88:9B:DC:95:31:51:2E:3D:D4:F9:42:D0:6A:A0:62:46:82:D2:7C:22:E7:29:A9:AA:E8:A5:8C:CF:C7:42",
-            wantKeyType:      "ec",
-            wantKeyBits:      384,
-            wantErr:          false,
-        },
-        {
-            name: "rsa 4096 cert",
-            // Watchout for indentations they will break PEM format
-            pem: readTestData(t, "cert-with-rsa-4096-key.pem"),
-            // Based on `openssl x509 -noout -text` report from the cert
-            wantSerial:       5186695743100577491,
-            wantSigningKeyID: "92:FA:CC:97:57:1E:31:84:A2:33:DD:9B:6A:A8:7C:FC:BE:E2:94:CA:AC:B3:33:17:39:3B:B8:67:9B:DC:C1:08",
-            wantKeyType:      "rsa",
-            wantKeyBits:      4096,
-            wantErr:          false,
-        },
-    }
-    for _, tt := range tests {
-        t.Run(tt.name, func(t *testing.T) {
-            root, err := parseCARoot(tt.pem, "consul", "cluster")
-            if tt.wantErr {
-                require.Error(t, err)
-                return
-            }
-            require.NoError(t, err)
-            require.Equal(t, tt.wantSerial, root.SerialNumber)
-            require.Equal(t, strings.ToLower(tt.wantSigningKeyID), root.SigningKeyID)
-            require.Equal(t, tt.wantKeyType, root.PrivateKeyType)
-            require.Equal(t, tt.wantKeyBits, root.PrivateKeyBits)
+func TestNewCARoot(t *testing.T) {
+    type testCase struct {
+        name        string
+        pem         string
+        expected    *structs.CARoot
+        expectedErr string
+    }
+
+    run := func(t *testing.T, tc testCase) {
+        root, err := newCARoot(tc.pem, "provider-name", "cluster-id")
+        if tc.expectedErr != "" {
+            testutil.RequireErrorContains(t, err, tc.expectedErr)
+            return
+        }
+        require.NoError(t, err)
+        assert.DeepEqual(t, root, tc.expected)
+    }
+
+    // Test certs can be generated with
+    //   go run connect/certgen/certgen.go -out-dir /tmp/connect-certs -key-type ec -key-bits 384
+    // serial generated with:
+    //   openssl x509 -noout -text
+    testCases := []testCase{
+        {
+            name:        "no cert",
+            expectedErr: "no PEM-encoded data found",
+        },
+        {
+            name: "type=ec bits=256",
+            pem:  readTestData(t, "cert-with-ec-256-key.pem"),
+            expected: &structs.CARoot{
+                ID:                  "c9:1b:24:e0:89:63:1a:ba:22:01:f4:cf:bc:f1:c0:36:b2:6b:6c:3d",
+                Name:                "Provider-Name CA Primary Cert",
+                SerialNumber:        8341954965092507701,
+                SigningKeyID:        "97:4d:17:81:64:f8:b4:af:05:e8:6c:79:c5:40:3b:0e:3e:8b:c0:ae:38:51:54:8a:2f:05:db:e3:e8:e4:24:ec",
+                ExternalTrustDomain: "cluster-id",
+                NotBefore:           time.Date(2019, 10, 17, 11, 46, 29, 0, time.UTC),
+                NotAfter:            time.Date(2029, 10, 17, 11, 46, 29, 0, time.UTC),
+                RootCert:            readTestData(t, "cert-with-ec-256-key.pem"),
+                Active:              true,
+                PrivateKeyType:      "ec",
+                PrivateKeyBits:      256,
+            },
+        },
+        {
+            name: "type=ec bits=384",
+            pem:  readTestData(t, "cert-with-ec-384-key.pem"),
+            expected: &structs.CARoot{
+                ID:                  "29:69:c4:0f:aa:8f:bd:07:31:0d:51:3b:45:62:3d:c0:b2:fc:c6:3f",
+                Name:                "Provider-Name CA Primary Cert",
+                SerialNumber:        2935109425518279965,
+                SigningKeyID:        "0b:a0:88:9b:dc:95:31:51:2e:3d:d4:f9:42:d0:6a:a0:62:46:82:d2:7c:22:e7:29:a9:aa:e8:a5:8c:cf:c7:42",
+                ExternalTrustDomain: "cluster-id",
+                NotBefore:           time.Date(2019, 10, 17, 11, 55, 18, 0, time.UTC),
+                NotAfter:            time.Date(2029, 10, 17, 11, 55, 18, 0, time.UTC),
+                RootCert:            readTestData(t, "cert-with-ec-384-key.pem"),
+                Active:              true,
+                PrivateKeyType:      "ec",
+                PrivateKeyBits:      384,
+            },
+        },
+        {
+            name: "type=rsa bits=4096",
+            pem:  readTestData(t, "cert-with-rsa-4096-key.pem"),
+            expected: &structs.CARoot{
+                ID:                  "3a:6a:e3:e2:2d:44:85:5a:e9:44:3b:ef:d2:90:78:83:7f:61:a2:84",
+                Name:                "Provider-Name CA Primary Cert",
+                SerialNumber:        5186695743100577491,
+                SigningKeyID:        "92:fa:cc:97:57:1e:31:84:a2:33:dd:9b:6a:a8:7c:fc:be:e2:94:ca:ac:b3:33:17:39:3b:b8:67:9b:dc:c1:08",
+                ExternalTrustDomain: "cluster-id",
+                NotBefore:           time.Date(2019, 10, 17, 11, 53, 15, 0, time.UTC),
+                NotAfter:            time.Date(2029, 10, 17, 11, 53, 15, 0, time.UTC),
+                RootCert:            readTestData(t, "cert-with-rsa-4096-key.pem"),
+                Active:              true,
+                PrivateKeyType:      "rsa",
+                PrivateKeyBits:      4096,
+            },
+        },
+        {
+            name: "two certs in pem",
+            pem:  readTestData(t, "pem-with-two-certs.pem"),
+            expected: &structs.CARoot{
+                ID:                  "42:43:10:1f:71:6b:21:21:d1:10:49:d1:f0:41:78:8c:0a:77:ef:c0",
+                Name:                "Provider-Name CA Primary Cert",
+                SerialNumber:        17692800288680335732,
+                SigningKeyID:        "9d:5c:27:43:ce:58:7b:ca:3e:7d:c4:fb:b6:2e:b7:13:e9:a1:68:3e",
+                ExternalTrustDomain: "cluster-id",
+                NotBefore:           time.Date(2022, 1, 5, 23, 22, 12, 0, time.UTC),
+                NotAfter:            time.Date(2022, 4, 7, 15, 22, 42, 0, time.UTC),
+                RootCert:            readTestData(t, "pem-with-two-certs.pem"),
+                Active:              true,
+                PrivateKeyType:      "ec",
+                PrivateKeyBits:      256,
+            },
+        },
+        {
+            name: "three certs in pem",
+            pem:  readTestData(t, "pem-with-three-certs.pem"),
+            expected: &structs.CARoot{
+                ID:                  "42:43:10:1f:71:6b:21:21:d1:10:49:d1:f0:41:78:8c:0a:77:ef:c0",
+                Name:                "Provider-Name CA Primary Cert",
+                SerialNumber:        17692800288680335732,
+                SigningKeyID:        "9d:5c:27:43:ce:58:7b:ca:3e:7d:c4:fb:b6:2e:b7:13:e9:a1:68:3e",
+                ExternalTrustDomain: "cluster-id",
+                NotBefore:           time.Date(2022, 1, 5, 23, 22, 12, 0, time.UTC),
+                NotAfter:            time.Date(2022, 4, 7, 15, 22, 42, 0, time.UTC),
+                RootCert:            readTestData(t, "pem-with-three-certs.pem"),
+                Active:              true,
+                PrivateKeyType:      "ec",
+                PrivateKeyBits:      256,
+            },
+        },
+    }
+    for _, tc := range testCases {
+        t.Run(tc.name, func(t *testing.T) {
+            run(t, tc)
         })
     }
 }
}
|
}
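The hunk above is the tail of a larger change; the `testCase` struct and `run` helper it relies on are defined earlier in the same test and fall outside this excerpt. A minimal sketch of what they appear to look like, inferred only from the field names and calls visible above (the error assertion in `run` is an assumption, not the literal upstream code), placed inside the test function:

```go
// Sketch only: inferred from the table above.
type testCase struct {
	name        string
	pem         string
	expected    *structs.CARoot
	expectedErr string
}

run := func(t *testing.T, tc testCase) {
	root, err := parseCARoot(tc.pem, "consul", "cluster")
	if tc.expectedErr != "" {
		// Assumed assertion; the real helper may check the error differently.
		require.Error(t, err)
		require.Contains(t, err.Error(), tc.expectedErr)
		return
	}
	require.NoError(t, err)
	assert.DeepEqual(t, root, tc.expected)
}
```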
@@ -9,7 +9,7 @@ import (
   "github.com/stretchr/testify/assert"
   "github.com/stretchr/testify/require"
 
-  "github.com/hashicorp/consul/agent/consul/state"
+  "github.com/hashicorp/consul/agent/configentry"
   "github.com/hashicorp/consul/agent/structs"
   tokenStore "github.com/hashicorp/consul/agent/token"
   "github.com/hashicorp/consul/sdk/testutil/retry"

@@ -535,17 +535,17 @@ func TestLeader_LegacyIntentionMigration(t *testing.T) {
     checkIntentions(t, s1, true, map[string]*structs.Intention{})
   }))
 
-  mapifyConfigs := func(entries interface{}) map[state.ConfigEntryKindName]*structs.ServiceIntentionsConfigEntry {
-    m := make(map[state.ConfigEntryKindName]*structs.ServiceIntentionsConfigEntry)
+  mapifyConfigs := func(entries interface{}) map[configentry.KindName]*structs.ServiceIntentionsConfigEntry {
+    m := make(map[configentry.KindName]*structs.ServiceIntentionsConfigEntry)
     switch v := entries.(type) {
     case []*structs.ServiceIntentionsConfigEntry:
       for _, entry := range v {
-        kn := state.NewConfigEntryKindName(entry.Kind, entry.Name, &entry.EnterpriseMeta)
+        kn := configentry.NewKindName(entry.Kind, entry.Name, &entry.EnterpriseMeta)
         m[kn] = entry
       }
     case []structs.ConfigEntry:
       for _, entry := range v {
-        kn := state.NewConfigEntryKindName(entry.GetKind(), entry.GetName(), entry.GetEnterpriseMeta())
+        kn := configentry.NewKindName(entry.GetKind(), entry.GetName(), entry.GetEnterpriseMeta())
         m[kn] = entry.(*structs.ServiceIntentionsConfigEntry)
       }
     default:
@@ -945,7 +945,17 @@ type blockingQueryResponseMeta interface {
 //
 // The query function is expected to be a closure that has access to responseMeta
 // so that it can set the Index. The actual result of the query is opaque to blockingQuery.
-// If query function returns an error, the error is returned to the caller immediately.
+//
+// The query function can return errNotFound, which is a sentinel error. Returning
+// errNotFound indicates that the query found no results, which allows
+// blockingQuery to keep blocking until the query returns a non-nil error.
+// The query function must take care to set the actual result of the query to
+// nil in these cases, otherwise when blockingQuery times out it may return
+// a previous result. errNotFound will never be returned to the caller, it is
+// converted to nil before returning.
+//
+// If query function returns any other error, the error is returned to the caller
+// immediately.
 //
 // The query function must follow these rules:
 //

@@ -983,6 +993,9 @@ func (s *Server) blockingQuery(
     var ws memdb.WatchSet
     err := query(ws, s.fsm.State())
     s.setQueryMeta(responseMeta, opts.GetToken())
+    if errors.Is(err, errNotFound) {
+      return nil
+    }
     return err
   }

@@ -995,6 +1008,8 @@ func (s *Server) blockingQuery(
   // decrement the count when the function returns.
   defer atomic.AddUint64(&s.queriesBlocking, ^uint64(0))
 
+  var notFound bool
+
   for {
     if opts.GetRequireConsistent() {
       if err := s.consistentRead(); err != nil {

@@ -1014,7 +1029,15 @@ func (s *Server) blockingQuery(
 
     err := query(ws, state)
     s.setQueryMeta(responseMeta, opts.GetToken())
-    if err != nil {
+    switch {
+    case errors.Is(err, errNotFound):
+      if notFound {
+        // query result has not changed
+        minQueryIndex = responseMeta.GetIndex()
+      }
+
+      notFound = true
+    case err != nil:
       return err
     }

@@ -1037,6 +1060,8 @@ func (s *Server) blockingQuery(
   }
 }
 
+var errNotFound = fmt.Errorf("no data found for query")
+
 // setQueryMeta is used to populate the QueryMeta data for an RPC call
 //
 // Note: This method must be called *after* filtering query results with ACLs.
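The doc comment above describes the contract for query closures that use the new sentinel. A minimal sketch of a query function that follows it; `lookupItem`, the reply shape, and the surrounding `srv`/`opts`/`meta` variables are hypothetical stand-ins, only the error-handling pattern is the point:

```go
// Illustrative only: lookupItem and the reply struct are hypothetical.
var reply struct {
	Index uint64
	Node  *structs.Node // stand-in payload
}

fn := func(ws memdb.WatchSet, store *state.Store) error {
	// Imagine a store lookup that reports (index, node, err).
	index, node, err := lookupItem(ws, store)
	if err != nil {
		return err
	}
	reply.Index = index
	if node == nil {
		// Clear the result before returning the sentinel so a timed-out
		// blocking query cannot hand back a stale value from an earlier pass.
		reply.Node = nil
		return errNotFound
	}
	reply.Node = node
	return nil
}

err := srv.blockingQuery(&opts, &meta, fn)
```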
@@ -227,11 +227,9 @@ func (m *MockSink) Close() error {
   return nil
 }
 
-func TestRPC_blockingQuery(t *testing.T) {
+func TestServer_blockingQuery(t *testing.T) {
   t.Parallel()
-  dir, s := testServer(t)
-  defer os.RemoveAll(dir)
-  defer s.Shutdown()
+  _, s := testServerWithConfig(t)
 
   // Perform a non-blocking query. Note that it's significant that the meta has
   // a zero index in response - the implied opts.MinQueryIndex is also zero but

@@ -391,6 +389,93 @@ func TestRPC_blockingQuery(t *testing.T) {
     require.NoError(t, err)
     require.True(t, meta.ResultsFilteredByACLs, "ResultsFilteredByACLs should be honored for authenticated calls")
   })
+
+  t.Run("non-blocking query for item that does not exist", func(t *testing.T) {
+    opts := structs.QueryOptions{}
+    meta := structs.QueryMeta{}
+    calls := 0
+    fn := func(_ memdb.WatchSet, _ *state.Store) error {
+      calls++
+      return errNotFound
+    }
+
+    err := s.blockingQuery(&opts, &meta, fn)
+    require.NoError(t, err)
+    require.Equal(t, 1, calls)
+  })
+
+  t.Run("blocking query for item that does not exist", func(t *testing.T) {
+    opts := structs.QueryOptions{MinQueryIndex: 3, MaxQueryTime: 100 * time.Millisecond}
+    meta := structs.QueryMeta{}
+    calls := 0
+    fn := func(ws memdb.WatchSet, _ *state.Store) error {
+      calls++
+      if calls == 1 {
+        meta.Index = 3
+
+        ch := make(chan struct{})
+        close(ch)
+        ws.Add(ch)
+        return errNotFound
+      }
+      meta.Index = 5
+      return errNotFound
+    }
+
+    err := s.blockingQuery(&opts, &meta, fn)
+    require.NoError(t, err)
+    require.Equal(t, 2, calls)
+  })
+
+  t.Run("blocking query for item that existed and is removed", func(t *testing.T) {
+    opts := structs.QueryOptions{MinQueryIndex: 3, MaxQueryTime: 100 * time.Millisecond}
+    meta := structs.QueryMeta{}
+    calls := 0
+    fn := func(ws memdb.WatchSet, _ *state.Store) error {
+      calls++
+      if calls == 1 {
+        meta.Index = 3
+
+        ch := make(chan struct{})
+        close(ch)
+        ws.Add(ch)
+        return nil
+      }
+      meta.Index = 5
+      return errNotFound
+    }
+
+    start := time.Now()
+    err := s.blockingQuery(&opts, &meta, fn)
+    require.True(t, time.Since(start) < opts.MaxQueryTime, "query timed out")
+    require.NoError(t, err)
+    require.Equal(t, 2, calls)
+  })
+
+  t.Run("blocking query for non-existent item that is created", func(t *testing.T) {
+    opts := structs.QueryOptions{MinQueryIndex: 3, MaxQueryTime: 100 * time.Millisecond}
+    meta := structs.QueryMeta{}
+    calls := 0
+    fn := func(ws memdb.WatchSet, _ *state.Store) error {
+      calls++
+      if calls == 1 {
+        meta.Index = 3
+
+        ch := make(chan struct{})
+        close(ch)
+        ws.Add(ch)
+        return errNotFound
+      }
+      meta.Index = 5
+      return nil
+    }
+
+    start := time.Now()
+    err := s.blockingQuery(&opts, &meta, fn)
+    require.True(t, time.Since(start) < opts.MaxQueryTime, "query timed out")
+    require.NoError(t, err)
+    require.Equal(t, 2, calls)
+  })
 }
 
 func TestRPC_ReadyForConsistentReads(t *testing.T) {
@@ -198,6 +198,7 @@ func (s *Session) Get(args *structs.SessionSpecificRequest,
         reply.Sessions = structs.Sessions{session}
       } else {
         reply.Sessions = nil
+        return errNotFound
       }
       s.srv.filterACLWithAuthorizer(authz, reply)
       return nil
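With this change, a blocking read of a session that does not exist keeps blocking until the session appears or the wait time elapses, instead of returning immediately. A hedged sketch of how a client would observe that through the public `api` package (the session ID and wait values are placeholders; the server may cap the wait time):

```go
package main

import (
	"fmt"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	// Block for up to 30s waiting for a session that may not exist yet.
	entry, meta, err := client.Session().Info("4ca8e74b-6350-7587-addf-a18084928f3c", &api.QueryOptions{
		WaitIndex: 5, // index returned by a previous read
		WaitTime:  30 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(meta.LastIndex, entry)
}
```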
@@ -3461,6 +3461,7 @@ func (s *Store) ServiceTopology(
     err              error
     fullyTransparent bool
     hasTransparent   bool
+    connectNative    bool
   )
   switch kind {
   case structs.ServiceKindIngressGateway:

@@ -3505,6 +3506,9 @@ func (s *Store) ServiceTopology(
         // transparent proxy mode. If ANY instance isn't in the right mode then the warning applies.
         fullyTransparent = false
       }
+      if proxy.ServiceConnect.Native {
+        connectNative = true
+      }
     }
 
   default:

@@ -3526,8 +3530,8 @@ func (s *Store) ServiceTopology(
 
   upstreamDecisions := make(map[string]structs.IntentionDecisionSummary)
 
-  // Only transparent proxies have upstreams from intentions
-  if hasTransparent {
+  // Only transparent proxies / connect native services have upstreams from intentions
+  if hasTransparent || connectNative {
     idx, intentionUpstreams, err := s.intentionTopologyTxn(tx, ws, sn, false, defaultAllow)
     if err != nil {
       return 0, nil, err

@@ -3607,8 +3611,8 @@ func (s *Store) ServiceTopology(
       sn = structs.NewServiceName(upstream.Service.Proxy.DestinationServiceName, &upstream.Service.EnterpriseMeta)
     }
 
-    // Avoid returning upstreams from intentions when none of the proxy instances of the target are in transparent mode.
-    if !hasTransparent && upstreamSources[sn.String()] != structs.TopologySourceRegistration {
+    // Avoid returning upstreams from intentions when none of the proxy instances of the target are in transparent mode or connect native.
+    if !hasTransparent && !connectNative && upstreamSources[sn.String()] != structs.TopologySourceRegistration {
       continue
     }
     upstreams = append(upstreams, upstream)

@@ -3711,6 +3715,7 @@ func (s *Store) ServiceTopology(
   }
 
   idx, unfilteredDownstreams, err := s.combinedServiceNodesTxn(tx, ws, downstreamNames)
+
   if err != nil {
     return 0, nil, fmt.Errorf("failed to get downstreams for %q: %v", sn.String(), err)
   }

@@ -3734,8 +3739,8 @@ func (s *Store) ServiceTopology(
     if downstream.Service.Kind == structs.ServiceKindConnectProxy {
       sn = structs.NewServiceName(downstream.Service.Proxy.DestinationServiceName, &downstream.Service.EnterpriseMeta)
     }
-    if _, ok := tproxyMap[sn]; !ok && downstreamSources[sn.String()] != structs.TopologySourceRegistration {
-      // If downstream is not a transparent proxy, remove references
+    if _, ok := tproxyMap[sn]; !ok && !downstream.Service.Connect.Native && downstreamSources[sn.String()] != structs.TopologySourceRegistration {
+      // If downstream is not a transparent proxy or connect native, remove references
       delete(downstreamSources, sn.String())
       delete(downstreamDecisions, sn.String())
       continue
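The hunks above extend the topology query so intention-derived upstreams and downstreams are kept not only for transparent proxies but also for connect-native services. For context, a hedged example of the kind of registration the new `connectNative` path covers, using the public `api` package (service name and port are arbitrary):

```go
package main

import "github.com/hashicorp/consul/api"

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}
	reg := &api.AgentServiceRegistration{
		Name: "payments", // arbitrary example service
		Port: 8080,
		// Connect-native: the service terminates its own mTLS instead of using a
		// sidecar proxy, which is the case the connectNative flag above accounts for.
		Connect: &api.AgentServiceConnect{Native: true},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		panic(err)
	}
}
```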
@@ -6,6 +6,7 @@ import (
 
   memdb "github.com/hashicorp/go-memdb"
 
+  "github.com/hashicorp/consul/agent/configentry"
   "github.com/hashicorp/consul/agent/connect"
   "github.com/hashicorp/consul/agent/consul/discoverychain"
   "github.com/hashicorp/consul/agent/structs"

@@ -105,7 +106,7 @@ func configEntryTxn(tx ReadTxn, ws memdb.WatchSet, kind, name string, entMeta *s
   idx := maxIndexTxn(tx, tableConfigEntries)
 
   // Get the existing config entry.
-  watchCh, existing, err := tx.FirstWatch(tableConfigEntries, "id", NewConfigEntryKindName(kind, name, entMeta))
+  watchCh, existing, err := tx.FirstWatch(tableConfigEntries, "id", configentry.NewKindName(kind, name, entMeta))
   if err != nil {
     return 0, nil, fmt.Errorf("failed config entry lookup: %s", err)
   }

@@ -290,7 +291,7 @@ func (s *Store) DeleteConfigEntry(idx uint64, kind, name string, entMeta *struct
 
 // TODO: accept structs.ConfigEntry instead of individual fields
 func deleteConfigEntryTxn(tx WriteTxn, idx uint64, kind, name string, entMeta *structs.EnterpriseMeta) error {
-  q := NewConfigEntryKindName(kind, name, entMeta)
+  q := configentry.NewKindName(kind, name, entMeta)
   existing, err := tx.First(tableConfigEntries, indexID, q)
   if err != nil {
     return fmt.Errorf("failed config entry lookup: %s", err)

@@ -370,7 +371,7 @@ func insertConfigEntryWithTxn(tx WriteTxn, idx uint64, conf structs.ConfigEntry)
 // to the caller that they can correct.
 func validateProposedConfigEntryInGraph(
   tx ReadTxn,
-  kindName ConfigEntryKindName,
+  kindName configentry.KindName,
   newEntry structs.ConfigEntry,
 ) error {
   switch kindName.Kind {

@@ -403,7 +404,7 @@ func validateProposedConfigEntryInGraph(
   return validateProposedConfigEntryInServiceGraph(tx, kindName, newEntry)
 }
 
-func checkGatewayClash(tx ReadTxn, kindName ConfigEntryKindName, otherKind string) error {
+func checkGatewayClash(tx ReadTxn, kindName configentry.KindName, otherKind string) error {
   _, entry, err := configEntryTxn(tx, nil, otherKind, kindName.Name, &kindName.EnterpriseMeta)
   if err != nil {
     return err

@@ -513,7 +514,7 @@ func (s *Store) discoveryChainSourcesTxn(tx ReadTxn, ws memdb.WatchSet, dc strin
 
 func validateProposedConfigEntryInServiceGraph(
   tx ReadTxn,
-  kindName ConfigEntryKindName,
+  kindName configentry.KindName,
   newEntry structs.ConfigEntry,
 ) error {
   // Collect all of the chains that could be affected by this change

@@ -658,7 +659,7 @@ func validateProposedConfigEntryInServiceGraph(
     checkChains[sn.ToServiceID()] = struct{}{}
   }
 
-  overrides := map[ConfigEntryKindName]structs.ConfigEntry{
+  overrides := map[configentry.KindName]structs.ConfigEntry{
     kindName: newEntry,
   }
 

@@ -738,7 +739,7 @@ func validateProposedConfigEntryInServiceGraph(
 func testCompileDiscoveryChain(
   tx ReadTxn,
   chainName string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
 ) (string, *structs.DiscoveryGraphNode, error) {
   _, speculativeEntries, err := readDiscoveryChainConfigEntriesTxn(tx, nil, chainName, overrides, entMeta)

@@ -827,7 +828,7 @@ func (s *Store) ReadDiscoveryChainConfigEntries(
   ws memdb.WatchSet,
   serviceName string,
   entMeta *structs.EnterpriseMeta,
-) (uint64, *structs.DiscoveryChainConfigEntries, error) {
+) (uint64, *configentry.DiscoveryChainSet, error) {
   return s.readDiscoveryChainConfigEntries(ws, serviceName, nil, entMeta)
 }
 

@@ -844,9 +845,9 @@ func (s *Store) ReadDiscoveryChainConfigEntries(
 func (s *Store) readDiscoveryChainConfigEntries(
   ws memdb.WatchSet,
   serviceName string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
-) (uint64, *structs.DiscoveryChainConfigEntries, error) {
+) (uint64, *configentry.DiscoveryChainSet, error) {
   tx := s.db.Txn(false)
   defer tx.Abort()
   return readDiscoveryChainConfigEntriesTxn(tx, ws, serviceName, overrides, entMeta)

@@ -856,10 +857,10 @@ func readDiscoveryChainConfigEntriesTxn(
   tx ReadTxn,
   ws memdb.WatchSet,
   serviceName string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
-) (uint64, *structs.DiscoveryChainConfigEntries, error) {
-  res := structs.NewDiscoveryChainConfigEntries()
+) (uint64, *configentry.DiscoveryChainSet, error) {
+  res := configentry.NewDiscoveryChainSet()
 
   // Note that below we always look up splitters and resolvers in pairs, even
   // in some circumstances where both are not strictly necessary.

@@ -1063,7 +1064,7 @@ func getProxyConfigEntryTxn(
   tx ReadTxn,
   ws memdb.WatchSet,
   name string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
 ) (uint64, *structs.ProxyConfigEntry, error) {
   idx, entry, err := configEntryWithOverridesTxn(tx, ws, structs.ProxyDefaults, name, overrides, entMeta)

@@ -1088,7 +1089,7 @@ func getServiceConfigEntryTxn(
   tx ReadTxn,
   ws memdb.WatchSet,
   serviceName string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
 ) (uint64, *structs.ServiceConfigEntry, error) {
   idx, entry, err := configEntryWithOverridesTxn(tx, ws, structs.ServiceDefaults, serviceName, overrides, entMeta)

@@ -1113,7 +1114,7 @@ func getRouterConfigEntryTxn(
   tx ReadTxn,
   ws memdb.WatchSet,
   serviceName string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
 ) (uint64, *structs.ServiceRouterConfigEntry, error) {
   idx, entry, err := configEntryWithOverridesTxn(tx, ws, structs.ServiceRouter, serviceName, overrides, entMeta)

@@ -1138,7 +1139,7 @@ func getSplitterConfigEntryTxn(
   tx ReadTxn,
   ws memdb.WatchSet,
   serviceName string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
 ) (uint64, *structs.ServiceSplitterConfigEntry, error) {
   idx, entry, err := configEntryWithOverridesTxn(tx, ws, structs.ServiceSplitter, serviceName, overrides, entMeta)

@@ -1163,7 +1164,7 @@ func getResolverConfigEntryTxn(
   tx ReadTxn,
   ws memdb.WatchSet,
   serviceName string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
 ) (uint64, *structs.ServiceResolverConfigEntry, error) {
   idx, entry, err := configEntryWithOverridesTxn(tx, ws, structs.ServiceResolver, serviceName, overrides, entMeta)

@@ -1188,7 +1189,7 @@ func getServiceIntentionsConfigEntryTxn(
   tx ReadTxn,
   ws memdb.WatchSet,
   name string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
 ) (uint64, *structs.ServiceIntentionsConfigEntry, error) {
   idx, entry, err := configEntryWithOverridesTxn(tx, ws, structs.ServiceIntentions, name, overrides, entMeta)

@@ -1210,11 +1211,11 @@ func configEntryWithOverridesTxn(
   ws memdb.WatchSet,
   kind string,
   name string,
-  overrides map[ConfigEntryKindName]structs.ConfigEntry,
+  overrides map[configentry.KindName]structs.ConfigEntry,
   entMeta *structs.EnterpriseMeta,
 ) (uint64, structs.ConfigEntry, error) {
   if len(overrides) > 0 {
-    kn := NewConfigEntryKindName(kind, name, entMeta)
+    kn := configentry.NewKindName(kind, name, entMeta)
     kn.Normalize()
     entry, ok := overrides[kn]
     if ok {

@@ -1244,7 +1245,7 @@ func protocolForService(
   }
   maxIdx = lib.MaxUint64(maxIdx, idx)
 
-  entries := structs.NewDiscoveryChainConfigEntries()
+  entries := configentry.NewDiscoveryChainSet()
   if proxyConfig != nil {
     entries.AddEntries(proxyConfig)
   }

@@ -1267,37 +1268,8 @@ func protocolForService(
   return maxIdx, chain.Protocol, nil
 }
 
-// ConfigEntryKindName is a value type useful for maps. You can use:
-//     map[ConfigEntryKindName]Payload
-// instead of:
-//     map[string]map[string]Payload
-type ConfigEntryKindName struct {
-  Kind string
-  Name string
-  structs.EnterpriseMeta
-}
-
-// NewConfigEntryKindName returns a new ConfigEntryKindName. The EnterpriseMeta
-// values will be normalized based on the kind.
-//
-// Any caller which modifies the EnterpriseMeta field must call Normalize before
-// persisting or using the value as a map key.
-func NewConfigEntryKindName(kind, name string, entMeta *structs.EnterpriseMeta) ConfigEntryKindName {
-  ret := ConfigEntryKindName{
-    Kind: kind,
-    Name: name,
-  }
-  if entMeta == nil {
-    entMeta = structs.DefaultEnterpriseMetaInDefaultPartition()
-  }
-
-  ret.EnterpriseMeta = *entMeta
-  ret.Normalize()
-  return ret
-}
-
-func newConfigEntryQuery(c structs.ConfigEntry) ConfigEntryKindName {
-  return NewConfigEntryKindName(c.GetKind(), c.GetName(), c.GetEnterpriseMeta())
+func newConfigEntryQuery(c structs.ConfigEntry) configentry.KindName {
+  return configentry.NewKindName(c.GetKind(), c.GetName(), c.GetEnterpriseMeta())
 }
 
 // ConfigEntryKindQuery is used to lookup config entries by their kind.
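The removed doc comment explains the point of the key type: a single flat map keyed by (kind, name, enterprise meta) instead of nested maps. A short illustration of the relocated type, based only on the constructor and normalization behavior shown in this diff; note these are internal `agent/...` packages, so the snippet is illustrative rather than something external code would normally import, and the payload chosen here is arbitrary:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/consul/agent/configentry"
	"github.com/hashicorp/consul/agent/structs"
)

func main() {
	// One flat map keyed by (kind, name, enterprise meta) instead of
	// map[string]map[string]Payload.
	overrides := map[configentry.KindName]structs.ConfigEntry{
		configentry.NewKindName(structs.ServiceDefaults, "web", nil): &structs.ServiceConfigEntry{
			Kind:     structs.ServiceDefaults,
			Name:     "web",
			Protocol: "http",
		},
	}

	// NewKindName normalizes the enterprise meta, so a key built the same way
	// (even with nil meta) finds the entry.
	if entry, ok := overrides[configentry.NewKindName(structs.ServiceDefaults, "web", nil)]; ok {
		fmt.Println(entry.GetName())
	}
}
```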
@@ -9,6 +9,7 @@ import (
 
   memdb "github.com/hashicorp/go-memdb"
 
+  "github.com/hashicorp/consul/agent/configentry"
   "github.com/hashicorp/consul/agent/structs"
 )
 

@@ -23,7 +24,7 @@ func indexFromConfigEntryKindName(arg interface{}) ([]byte, error) {
   case ConfigEntryKindQuery:
     b.String(strings.ToLower(n.Kind))
     return b.Bytes(), nil
-  case ConfigEntryKindName:
+  case configentry.KindName:
     b.String(strings.ToLower(n.Kind))
     b.String(strings.ToLower(n.Name))
     return b.Bytes(), nil

@@ -3,13 +3,16 @@
 
 package state
 
-import "github.com/hashicorp/consul/agent/structs"
+import (
+  "github.com/hashicorp/consul/agent/configentry"
+  "github.com/hashicorp/consul/agent/structs"
+)
 
 func testIndexerTableConfigEntries() map[string]indexerTestCase {
   return map[string]indexerTestCase{
     indexID: {
       read: indexValue{
-        source: ConfigEntryKindName{
+        source: configentry.KindName{
           Kind: "Proxy-Defaults",
           Name: "NaMe",
         },
@@ -7,6 +7,7 @@ import (
   memdb "github.com/hashicorp/go-memdb"
   "github.com/stretchr/testify/require"
 
+  "github.com/hashicorp/consul/agent/configentry"
   "github.com/hashicorp/consul/agent/structs"
   "github.com/hashicorp/consul/sdk/testutil"
 )

@@ -999,11 +1000,11 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
   for _, tc := range []struct {
     name           string
     entries        []structs.ConfigEntry
-    expectBefore   []ConfigEntryKindName
-    overrides      map[ConfigEntryKindName]structs.ConfigEntry
-    expectAfter    []ConfigEntryKindName
+    expectBefore   []configentry.KindName
+    overrides      map[configentry.KindName]structs.ConfigEntry
+    expectAfter    []configentry.KindName
     expectAfterErr string
-    checkAfter     func(t *testing.T, entrySet *structs.DiscoveryChainConfigEntries)
+    checkAfter     func(t *testing.T, entrySet *configentry.DiscoveryChainSet)
   }{
     {
       name: "mask service-defaults",

@@ -1014,13 +1015,13 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
           Protocol: "tcp",
         },
       },
-      expectBefore: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
+      expectBefore: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
       },
-      overrides: map[ConfigEntryKindName]structs.ConfigEntry{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil): nil,
+      overrides: map[configentry.KindName]structs.ConfigEntry{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil): nil,
       },
-      expectAfter: []ConfigEntryKindName{
+      expectAfter: []configentry.KindName{
         // nothing
       },
     },

@@ -1033,20 +1034,20 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
           Protocol: "tcp",
         },
       },
-      expectBefore: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
+      expectBefore: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
       },
-      overrides: map[ConfigEntryKindName]structs.ConfigEntry{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil): &structs.ServiceConfigEntry{
+      overrides: map[configentry.KindName]structs.ConfigEntry{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil): &structs.ServiceConfigEntry{
           Kind:     structs.ServiceDefaults,
           Name:     "main",
           Protocol: "grpc",
         },
       },
-      expectAfter: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
+      expectAfter: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
       },
-      checkAfter: func(t *testing.T, entrySet *structs.DiscoveryChainConfigEntries) {
+      checkAfter: func(t *testing.T, entrySet *configentry.DiscoveryChainSet) {
         defaults := entrySet.GetService(structs.NewServiceID("main", nil))
         require.NotNil(t, defaults)
         require.Equal(t, "grpc", defaults.Protocol)

@@ -1066,15 +1067,15 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
         Name: "main",
       },
     },
-      expectBefore: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
-        NewConfigEntryKindName(structs.ServiceRouter, "main", nil),
+      expectBefore: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
+        configentry.NewKindName(structs.ServiceRouter, "main", nil),
       },
-      overrides: map[ConfigEntryKindName]structs.ConfigEntry{
-        NewConfigEntryKindName(structs.ServiceRouter, "main", nil): nil,
+      overrides: map[configentry.KindName]structs.ConfigEntry{
+        configentry.NewKindName(structs.ServiceRouter, "main", nil): nil,
       },
-      expectAfter: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
+      expectAfter: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
       },
     },
     {

@@ -1111,13 +1112,13 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
         },
       },
     },
-      expectBefore: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
-        NewConfigEntryKindName(structs.ServiceResolver, "main", nil),
-        NewConfigEntryKindName(structs.ServiceRouter, "main", nil),
+      expectBefore: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
+        configentry.NewKindName(structs.ServiceResolver, "main", nil),
+        configentry.NewKindName(structs.ServiceRouter, "main", nil),
       },
-      overrides: map[ConfigEntryKindName]structs.ConfigEntry{
-        NewConfigEntryKindName(structs.ServiceRouter, "main", nil): &structs.ServiceRouterConfigEntry{
+      overrides: map[configentry.KindName]structs.ConfigEntry{
+        configentry.NewKindName(structs.ServiceRouter, "main", nil): &structs.ServiceRouterConfigEntry{
           Kind: structs.ServiceRouter,
           Name: "main",
           Routes: []structs.ServiceRoute{

@@ -1134,12 +1135,12 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
         },
       },
     },
-      expectAfter: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
-        NewConfigEntryKindName(structs.ServiceResolver, "main", nil),
-        NewConfigEntryKindName(structs.ServiceRouter, "main", nil),
+      expectAfter: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
+        configentry.NewKindName(structs.ServiceResolver, "main", nil),
+        configentry.NewKindName(structs.ServiceRouter, "main", nil),
       },
-      checkAfter: func(t *testing.T, entrySet *structs.DiscoveryChainConfigEntries) {
+      checkAfter: func(t *testing.T, entrySet *configentry.DiscoveryChainSet) {
        router := entrySet.GetRouter(structs.NewServiceID("main", nil))
        require.NotNil(t, router)
        require.Len(t, router.Routes, 1)

@@ -1174,15 +1175,15 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
         },
       },
     },
-      expectBefore: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
-        NewConfigEntryKindName(structs.ServiceSplitter, "main", nil),
+      expectBefore: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
+        configentry.NewKindName(structs.ServiceSplitter, "main", nil),
       },
-      overrides: map[ConfigEntryKindName]structs.ConfigEntry{
-        NewConfigEntryKindName(structs.ServiceSplitter, "main", nil): nil,
+      overrides: map[configentry.KindName]structs.ConfigEntry{
+        configentry.NewKindName(structs.ServiceSplitter, "main", nil): nil,
       },
-      expectAfter: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
+      expectAfter: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
       },
     },
     {

@@ -1201,12 +1202,12 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
         },
       },
     },
-      expectBefore: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
-        NewConfigEntryKindName(structs.ServiceSplitter, "main", nil),
+      expectBefore: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
+        configentry.NewKindName(structs.ServiceSplitter, "main", nil),
       },
-      overrides: map[ConfigEntryKindName]structs.ConfigEntry{
-        NewConfigEntryKindName(structs.ServiceSplitter, "main", nil): &structs.ServiceSplitterConfigEntry{
+      overrides: map[configentry.KindName]structs.ConfigEntry{
+        configentry.NewKindName(structs.ServiceSplitter, "main", nil): &structs.ServiceSplitterConfigEntry{
          Kind: structs.ServiceSplitter,
          Name: "main",
          Splits: []structs.ServiceSplit{

@@ -1215,11 +1216,11 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
         },
       },
     },
-      expectAfter: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceDefaults, "main", nil),
-        NewConfigEntryKindName(structs.ServiceSplitter, "main", nil),
+      expectAfter: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceDefaults, "main", nil),
+        configentry.NewKindName(structs.ServiceSplitter, "main", nil),
       },
-      checkAfter: func(t *testing.T, entrySet *structs.DiscoveryChainConfigEntries) {
+      checkAfter: func(t *testing.T, entrySet *configentry.DiscoveryChainSet) {
        splitter := entrySet.GetSplitter(structs.NewServiceID("main", nil))
        require.NotNil(t, splitter)
        require.Len(t, splitter.Splits, 2)

@@ -1240,13 +1241,13 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
         Name: "main",
       },
     },
-      expectBefore: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceResolver, "main", nil),
+      expectBefore: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceResolver, "main", nil),
       },
-      overrides: map[ConfigEntryKindName]structs.ConfigEntry{
-        NewConfigEntryKindName(structs.ServiceResolver, "main", nil): nil,
+      overrides: map[configentry.KindName]structs.ConfigEntry{
+        configentry.NewKindName(structs.ServiceResolver, "main", nil): nil,
       },
-      expectAfter: []ConfigEntryKindName{
+      expectAfter: []configentry.KindName{
         // nothing
       },
     },

@@ -1258,20 +1259,20 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
         Name: "main",
       },
     },
-      expectBefore: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceResolver, "main", nil),
+      expectBefore: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceResolver, "main", nil),
       },
-      overrides: map[ConfigEntryKindName]structs.ConfigEntry{
-        NewConfigEntryKindName(structs.ServiceResolver, "main", nil): &structs.ServiceResolverConfigEntry{
+      overrides: map[configentry.KindName]structs.ConfigEntry{
+        configentry.NewKindName(structs.ServiceResolver, "main", nil): &structs.ServiceResolverConfigEntry{
          Kind:           structs.ServiceResolver,
          Name:           "main",
          ConnectTimeout: 33 * time.Second,
        },
      },
-      expectAfter: []ConfigEntryKindName{
-        NewConfigEntryKindName(structs.ServiceResolver, "main", nil),
+      expectAfter: []configentry.KindName{
+        configentry.NewKindName(structs.ServiceResolver, "main", nil),
       },
-      checkAfter: func(t *testing.T, entrySet *structs.DiscoveryChainConfigEntries) {
+      checkAfter: func(t *testing.T, entrySet *configentry.DiscoveryChainSet) {
        resolver := entrySet.GetResolver(structs.NewServiceID("main", nil))
        require.NotNil(t, resolver)
        require.Equal(t, 33*time.Second, resolver.ConnectTimeout)

@@ -1313,38 +1314,38 @@ func TestStore_ReadDiscoveryChainConfigEntries_Overrides(t *testing.T) {
   }
 }
 
-func entrySetToKindNames(entrySet *structs.DiscoveryChainConfigEntries) []ConfigEntryKindName {
-  var out []ConfigEntryKindName
+func entrySetToKindNames(entrySet *configentry.DiscoveryChainSet) []configentry.KindName {
+  var out []configentry.KindName
   for _, entry := range entrySet.Routers {
-    out = append(out, NewConfigEntryKindName(
+    out = append(out, configentry.NewKindName(
       entry.Kind,
       entry.Name,
       &entry.EnterpriseMeta,
     ))
   }
   for _, entry := range entrySet.Splitters {
-    out = append(out, NewConfigEntryKindName(
+    out = append(out, configentry.NewKindName(
       entry.Kind,
       entry.Name,
       &entry.EnterpriseMeta,
     ))
   }
   for _, entry := range entrySet.Resolvers {
-    out = append(out, NewConfigEntryKindName(
+    out = append(out, configentry.NewKindName(
       entry.Kind,
       entry.Name,
       &entry.EnterpriseMeta,
     ))
   }
   for _, entry := range entrySet.Services {
-    out = append(out, NewConfigEntryKindName(
+    out = append(out, configentry.NewKindName(
       entry.Kind,
       entry.Name,
      &entry.EnterpriseMeta,
    ))
  }
  for _, entry := range entrySet.ProxyDefaults {
-    out = append(out, NewConfigEntryKindName(
+    out = append(out, configentry.NewKindName(
      entry.Kind,
      entry.Name,
      &entry.EnterpriseMeta,
@@ -0,0 +1,48 @@
+-----BEGIN CERTIFICATE-----
+MIICUjCCATqgAwIBAgIUQjOIDzaM7bGW8bU69Yl0H0C4WXQwDQYJKoZIhvcNAQEL
+BQAwFzEVMBMGA1UEAxMMY29ycG9yYXRlIENBMB4XDTIyMDEwNTIzMjIxMloXDTIy
+MDQwNzE1MjI0MlowFTETMBEGA1UEAxMKcHJpbWFyeSBDQTBZMBMGByqGSM49AgEG
+CCqGSM49AwEHA0IABEIcOmVSobge9pLDGh6rfyFg2+ilTFmo2ICv5vrgUfIZhi8O
+fwYz5WGb7qBPRdMw9kP8BWH/lCrn2W3Ax3x2E+2jYzBhMA4GA1UdDwEB/wQEAwIB
+BjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSdXCdDzlh7yj59xPu2LrcT6aFo
+PjAfBgNVHSMEGDAWgBS+x+IFMFb+hCJy1OQzcdzJuwDVhDANBgkqhkiG9w0BAQsF
+AAOCAQEAWWBBoygbjUEtoueGuC+jHAlr+VOBwiQPJLQA+xtbCvWSn8yIx/M1RyhY
+0/6WLMzhYA1lQAIze8CgKzqoGXXIcHif3PRZ3mRUMNdV/qGUv0oHZBzTKZVySOIm
+MLIoq7WvyVdVNxyvRalhHxiQA1Hrh+zQKjXhVPM6dpG0duTNYit9kJCCeNDzRjWc
+a/GgFyeeYMTheU3eBR6Vp2A8hy2h5xw82ul8YLwX0bCtcP12XAUzj3jFqwt6RLxW
+Wc7rvsLfgimEfulQwo2WLPWZw8bJdnPvNcUFX8f2Zvqy0Jg6fELnxO+AdHnAnI9J
+WtJr0ImA95Hw8gGTzmXOddYVGHuGLA==
+-----END CERTIFICATE-----
+-----BEGIN CERTIFICATE-----
+MIIDHzCCAgegAwIBAgIUDaEOI5nsEt9abNBbJibbQt+VZQIwDQYJKoZIhvcNAQEL
+BQAwFzEVMBMGA1UEAxMMY29ycG9yYXRlIENBMB4XDTIyMDEwNTIzMjIxMloXDTIy
+MDQxNTIzMjI0MlowFzEVMBMGA1UEAxMMY29ycG9yYXRlIENBMIIBIjANBgkqhkiG
+9w0BAQEFAAOCAQ8AMIIBCgKCAQEAut/Gbr3MvypzEmRDTl7HGaSoVIydNEZNPqDD
+jh1lqMFywB4DujTmkWLYcPJJ0RTT2NsSakteti/e1DHCuBSU0t3Q3K1paTh8aVLx
+eK0IKNlCWqX5d1aYzCNZsRjJuQgPX6p/xcNGS+RS27jmRWPpvm6n1JfMvYRa7fF+
+HnKhGNO+hDbhkQO4s0V1U+unNhshKDhTW3mBLmAEb2OHLOEaUZtYSbqr1E9tYXgU
+DiYRkeWUpQXJ6pE91fmcaZFG0SxkqWnhe7GUa6wbb/vROWph4A1ZVHympBtOYwoJ
+eibcJjBZLrugZdix8kl8NDI7SuIM/P0x0m9WkNfhJ9vSgQXlaQIDAQABo2MwYTAO
+BgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUvsfiBTBW
+/oQictTkM3HcybsA1YQwHwYDVR0jBBgwFoAUvsfiBTBW/oQictTkM3HcybsA1YQw
+DQYJKoZIhvcNAQELBQADggEBALUJWitOV4xAvfNB8Z20AQ+/hdXkWVgj1VBbd++v
++X88q1TnueKAExU5o87MCjh9jMxalZqSVN9MUbQ4Xa+tmkjayizdpFaw6TbbaMIB
+Tgqq5ATXMnOdZd46QC764Po9R9+k9hk4dNIr5gk1ifXZDMy/7jSOVARvpwzr0cTx
+flRCTgZbcK10freoU7a74/YjEpG0wggGlR4aRWfm90Im9JM3aI55zAYQFzduf56c
+HXJDLgBtbOx/ceqVrkPdvYwP9Q34tKAMiheQ0G3tTxP3Xc87gh4UEDV02oHhcbqw
+WSm+8zTfGUlchowPRdqKE66urWTep+BA9c8zUqDdoq5lE9s=
+-----END CERTIFICATE-----
+-----BEGIN CERTIFICATE-----
+MIICGTCCAZ+gAwIBAgIIJhC6ZZyZ/lQwCgYIKoZIzj0EAwMwFDESMBAGA1UEAxMJ
+VGVzdCBDQSAxMB4XDTIyMDEwNTIzNDMyNVoXDTMyMDEwNTIzNDMyNVowFDESMBAG
+A1UEAxMJVGVzdCBDQSAxMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAETEyAhuLLOcxy
+z2UHI7ePcB5AXL1o6mLwVfzyeaGfqUevzrFcLQ7WPiypZJW1KhOW5Q2bRgcjE8y3
+fN+B8D+KT4fPtaRLtUVX6aZ0LCROFdgWjVo2DCvCq5VQnCGjW8r0o4G9MIG6MA4G
+A1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/MCkGA1UdDgQiBCBU/reewmUW
+iduB8xxfW5clyUmrMewrWwtJuWPA/tFvTTArBgNVHSMEJDAigCBU/reewmUWiduB
+8xxfW5clyUmrMewrWwtJuWPA/tFvTTA/BgNVHREEODA2hjRzcGlmZmU6Ly8xMTEx
+MTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqGSM49
+BAMDA2gAMGUCMA4V/Iemelne4ZB+0glmxoKV6OPQ64oKkkrcy+vo1t1RZ+7jntRx
+mxAnY3S2m35boQIxAOARpY+qfR3U3JM+vMW9KO0/KqM+y1/uvIaOA0bQex2w8bfN
+V+QjFUDmjTT1dLpc7A==
+-----END CERTIFICATE-----

@@ -0,0 +1,34 @@
+-----BEGIN CERTIFICATE-----
+MIICUjCCATqgAwIBAgIUQjOIDzaM7bGW8bU69Yl0H0C4WXQwDQYJKoZIhvcNAQEL
+BQAwFzEVMBMGA1UEAxMMY29ycG9yYXRlIENBMB4XDTIyMDEwNTIzMjIxMloXDTIy
+MDQwNzE1MjI0MlowFTETMBEGA1UEAxMKcHJpbWFyeSBDQTBZMBMGByqGSM49AgEG
+CCqGSM49AwEHA0IABEIcOmVSobge9pLDGh6rfyFg2+ilTFmo2ICv5vrgUfIZhi8O
+fwYz5WGb7qBPRdMw9kP8BWH/lCrn2W3Ax3x2E+2jYzBhMA4GA1UdDwEB/wQEAwIB
+BjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSdXCdDzlh7yj59xPu2LrcT6aFo
+PjAfBgNVHSMEGDAWgBS+x+IFMFb+hCJy1OQzcdzJuwDVhDANBgkqhkiG9w0BAQsF
+AAOCAQEAWWBBoygbjUEtoueGuC+jHAlr+VOBwiQPJLQA+xtbCvWSn8yIx/M1RyhY
+0/6WLMzhYA1lQAIze8CgKzqoGXXIcHif3PRZ3mRUMNdV/qGUv0oHZBzTKZVySOIm
+MLIoq7WvyVdVNxyvRalhHxiQA1Hrh+zQKjXhVPM6dpG0duTNYit9kJCCeNDzRjWc
+a/GgFyeeYMTheU3eBR6Vp2A8hy2h5xw82ul8YLwX0bCtcP12XAUzj3jFqwt6RLxW
+Wc7rvsLfgimEfulQwo2WLPWZw8bJdnPvNcUFX8f2Zvqy0Jg6fELnxO+AdHnAnI9J
+WtJr0ImA95Hw8gGTzmXOddYVGHuGLA==
+-----END CERTIFICATE-----
+-----BEGIN CERTIFICATE-----
+MIIDHzCCAgegAwIBAgIUDaEOI5nsEt9abNBbJibbQt+VZQIwDQYJKoZIhvcNAQEL
+BQAwFzEVMBMGA1UEAxMMY29ycG9yYXRlIENBMB4XDTIyMDEwNTIzMjIxMloXDTIy
+MDQxNTIzMjI0MlowFzEVMBMGA1UEAxMMY29ycG9yYXRlIENBMIIBIjANBgkqhkiG
+9w0BAQEFAAOCAQ8AMIIBCgKCAQEAut/Gbr3MvypzEmRDTl7HGaSoVIydNEZNPqDD
+jh1lqMFywB4DujTmkWLYcPJJ0RTT2NsSakteti/e1DHCuBSU0t3Q3K1paTh8aVLx
+eK0IKNlCWqX5d1aYzCNZsRjJuQgPX6p/xcNGS+RS27jmRWPpvm6n1JfMvYRa7fF+
+HnKhGNO+hDbhkQO4s0V1U+unNhshKDhTW3mBLmAEb2OHLOEaUZtYSbqr1E9tYXgU
+DiYRkeWUpQXJ6pE91fmcaZFG0SxkqWnhe7GUa6wbb/vROWph4A1ZVHympBtOYwoJ
+eibcJjBZLrugZdix8kl8NDI7SuIM/P0x0m9WkNfhJ9vSgQXlaQIDAQABo2MwYTAO
+BgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUvsfiBTBW
+/oQictTkM3HcybsA1YQwHwYDVR0jBBgwFoAUvsfiBTBW/oQictTkM3HcybsA1YQw
+DQYJKoZIhvcNAQELBQADggEBALUJWitOV4xAvfNB8Z20AQ+/hdXkWVgj1VBbd++v
++X88q1TnueKAExU5o87MCjh9jMxalZqSVN9MUbQ4Xa+tmkjayizdpFaw6TbbaMIB
+Tgqq5ATXMnOdZd46QC764Po9R9+k9hk4dNIr5gk1ifXZDMy/7jSOVARvpwzr0cTx
+flRCTgZbcK10freoU7a74/YjEpG0wggGlR4aRWfm90Im9JM3aI55zAYQFzduf56c
+HXJDLgBtbOx/ceqVrkPdvYwP9Q34tKAMiheQ0G3tTxP3Xc87gh4UEDV02oHhcbqw
+WSm+8zTfGUlchowPRdqKE66urWTep+BA9c8zUqDdoq5lE9s=
+-----END CERTIFICATE-----
@@ -1333,143 +1333,6 @@ func canWriteDiscoveryChain(entry discoveryChainConfigEntry, authz acl.Authorize
 	return true
 }
 
-// DiscoveryChainConfigEntries wraps just the raw cross-referenced config
-// entries. None of these are defaulted.
-type DiscoveryChainConfigEntries struct {
-	Routers       map[ServiceID]*ServiceRouterConfigEntry
-	Splitters     map[ServiceID]*ServiceSplitterConfigEntry
-	Resolvers     map[ServiceID]*ServiceResolverConfigEntry
-	Services      map[ServiceID]*ServiceConfigEntry
-	ProxyDefaults map[string]*ProxyConfigEntry
-}
-
-func NewDiscoveryChainConfigEntries() *DiscoveryChainConfigEntries {
-	return &DiscoveryChainConfigEntries{
-		Routers:       make(map[ServiceID]*ServiceRouterConfigEntry),
-		Splitters:     make(map[ServiceID]*ServiceSplitterConfigEntry),
-		Resolvers:     make(map[ServiceID]*ServiceResolverConfigEntry),
-		Services:      make(map[ServiceID]*ServiceConfigEntry),
-		ProxyDefaults: make(map[string]*ProxyConfigEntry),
-	}
-}
-
-func (e *DiscoveryChainConfigEntries) GetRouter(sid ServiceID) *ServiceRouterConfigEntry {
-	if e.Routers != nil {
-		return e.Routers[sid]
-	}
-	return nil
-}
-
-func (e *DiscoveryChainConfigEntries) GetSplitter(sid ServiceID) *ServiceSplitterConfigEntry {
-	if e.Splitters != nil {
-		return e.Splitters[sid]
-	}
-	return nil
-}
-
-func (e *DiscoveryChainConfigEntries) GetResolver(sid ServiceID) *ServiceResolverConfigEntry {
-	if e.Resolvers != nil {
-		return e.Resolvers[sid]
-	}
-	return nil
-}
-
-func (e *DiscoveryChainConfigEntries) GetService(sid ServiceID) *ServiceConfigEntry {
-	if e.Services != nil {
-		return e.Services[sid]
-	}
-	return nil
-}
-
-func (e *DiscoveryChainConfigEntries) GetProxyDefaults(partition string) *ProxyConfigEntry {
-	if e.ProxyDefaults != nil {
-		return e.ProxyDefaults[partition]
-	}
-	return nil
-}
-
-// AddRouters adds router configs. Convenience function for testing.
-func (e *DiscoveryChainConfigEntries) AddRouters(entries ...*ServiceRouterConfigEntry) {
-	if e.Routers == nil {
-		e.Routers = make(map[ServiceID]*ServiceRouterConfigEntry)
-	}
-	for _, entry := range entries {
-		e.Routers[NewServiceID(entry.Name, &entry.EnterpriseMeta)] = entry
-	}
-}
-
-// AddSplitters adds splitter configs. Convenience function for testing.
-func (e *DiscoveryChainConfigEntries) AddSplitters(entries ...*ServiceSplitterConfigEntry) {
-	if e.Splitters == nil {
-		e.Splitters = make(map[ServiceID]*ServiceSplitterConfigEntry)
-	}
-	for _, entry := range entries {
-		e.Splitters[NewServiceID(entry.Name, entry.GetEnterpriseMeta())] = entry
-	}
-}
-
-// AddResolvers adds resolver configs. Convenience function for testing.
-func (e *DiscoveryChainConfigEntries) AddResolvers(entries ...*ServiceResolverConfigEntry) {
-	if e.Resolvers == nil {
-		e.Resolvers = make(map[ServiceID]*ServiceResolverConfigEntry)
-	}
-	for _, entry := range entries {
-		e.Resolvers[NewServiceID(entry.Name, entry.GetEnterpriseMeta())] = entry
-	}
-}
-
-// AddServices adds service configs. Convenience function for testing.
-func (e *DiscoveryChainConfigEntries) AddServices(entries ...*ServiceConfigEntry) {
-	if e.Services == nil {
-		e.Services = make(map[ServiceID]*ServiceConfigEntry)
-	}
-	for _, entry := range entries {
-		e.Services[NewServiceID(entry.Name, entry.GetEnterpriseMeta())] = entry
-	}
-}
-
-// AddProxyDefaults adds proxy-defaults configs. Convenience function for testing.
-func (e *DiscoveryChainConfigEntries) AddProxyDefaults(entries ...*ProxyConfigEntry) {
-	if e.ProxyDefaults == nil {
-		e.ProxyDefaults = make(map[string]*ProxyConfigEntry)
-	}
-	for _, entry := range entries {
-		e.ProxyDefaults[entry.PartitionOrDefault()] = entry
-	}
-}
-
-// AddEntries adds generic configs. Convenience function for testing. Panics on
-// operator error.
-func (e *DiscoveryChainConfigEntries) AddEntries(entries ...ConfigEntry) {
-	for _, entry := range entries {
-		switch entry.GetKind() {
-		case ServiceRouter:
-			e.AddRouters(entry.(*ServiceRouterConfigEntry))
-		case ServiceSplitter:
-			e.AddSplitters(entry.(*ServiceSplitterConfigEntry))
-		case ServiceResolver:
-			e.AddResolvers(entry.(*ServiceResolverConfigEntry))
-		case ServiceDefaults:
-			e.AddServices(entry.(*ServiceConfigEntry))
-		case ProxyDefaults:
-			if entry.GetName() != ProxyConfigGlobal {
-				panic("the only supported proxy-defaults name is '" + ProxyConfigGlobal + "'")
-			}
-			e.AddProxyDefaults(entry.(*ProxyConfigEntry))
-		default:
-			panic("unhandled config entry kind: " + entry.GetKind())
-		}
-	}
-}
-
-func (e *DiscoveryChainConfigEntries) IsEmpty() bool {
-	return e.IsChainEmpty() && len(e.Services) == 0 && len(e.ProxyDefaults) == 0
-}
-
-func (e *DiscoveryChainConfigEntries) IsChainEmpty() bool {
-	return len(e.Routers) == 0 && len(e.Splitters) == 0 && len(e.Resolvers) == 0
-}
-
 // DiscoveryChainRequest is used when requesting the discovery chain for a
 // service.
 type DiscoveryChainRequest struct {
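The hunk above only shows these helpers being removed from this file; assuming they stay available elsewhere under the `structs` package, a minimal usage sketch looks like the following. The test name and the `web` service are illustrative, and passing `nil` enterprise metadata to `NewServiceID` is assumed to normalize to the default namespace.

```go
// Sketch of how the convenience helpers above are typically exercised in a
// test, under the assumptions described in the preceding paragraph.
package structs_test

import (
	"testing"

	"github.com/hashicorp/consul/agent/structs"
)

func TestDiscoveryChainConfigEntriesSketch(t *testing.T) {
	entries := structs.NewDiscoveryChainConfigEntries()

	// AddEntries dispatches on the entry Kind and panics on unknown kinds,
	// which is why it is documented as a test-only convenience.
	entries.AddEntries(&structs.ServiceResolverConfigEntry{
		Kind: structs.ServiceResolver,
		Name: "web",
	})

	if entries.IsChainEmpty() {
		t.Fatal("expected a non-empty chain after adding a resolver")
	}
	if entries.GetResolver(structs.NewServiceID("web", nil)) == nil {
		t.Fatal("expected to find the resolver for web")
	}
}
```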
@@ -66,14 +66,15 @@ func (r IndexedCARoots) Active() *CARoot {
 
 // CARoot represents a root CA certificate that is trusted.
 type CARoot struct {
-	// ID is a globally unique ID (UUID) representing this CA root.
+	// ID is a globally unique ID (UUID) representing this CA chain. It is
+	// calculated from the SHA1 of the primary CA certificate.
 	ID string
 
 	// Name is a human-friendly name for this CA root. This value is
 	// opaque to Consul and is not used for anything internally.
 	Name string
 
-	// SerialNumber is the x509 serial number of the certificate.
+	// SerialNumber is the x509 serial number of the primary CA certificate.
 	SerialNumber uint64
 
 	// SigningKeyID is the connect.HexString encoded id of the public key that
@@ -96,8 +97,11 @@ type CARoot struct {
 	// future flexibility.
 	ExternalTrustDomain string
 
-	// Time validity bounds.
+	// NotBefore is the x509.Certificate.NotBefore value of the primary CA
+	// certificate. This value should generally be a time in the past.
 	NotBefore time.Time
+	// NotAfter is the x509.Certificate.NotAfter value of the primary CA
+	// certificate. This is the time when the certificate will expire.
 	NotAfter time.Time
 
 	// RootCert is the PEM-encoded public certificate for the root CA. The
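The new comments tie `ID`, `SerialNumber`, `NotBefore` and `NotAfter` to fields of the primary CA certificate. The sketch below shows, using only the standard library, where those x509 values come from; the exact ID derivation Consul uses is not visible in this diff, so the SHA1 fingerprint here is an illustration rather than the real implementation, and the file path is hypothetical.

```go
// Illustrative only: read the x509 fields the CARoot comments above refer to,
// and compute a SHA1 fingerprint of the primary CA certificate.
package main

import (
	"crypto/sha1"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	raw, err := os.ReadFile("primary_ca.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	sum := sha1.Sum(cert.Raw) // fingerprint of the primary CA certificate
	fmt.Println("sha1 fingerprint:", hex.EncodeToString(sum[:]))
	fmt.Println("serial number:   ", cert.SerialNumber)
	fmt.Println("not before:      ", cert.NotBefore)
	fmt.Println("not after:       ", cert.NotAfter)
}
```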
@@ -35,6 +35,7 @@ type ServiceSummary struct {
 	GatewayConfig       GatewayConfig
 	TransparentProxy    bool
 	transparentProxySet bool
+	ConnectNative       bool
 
 	structs.EnterpriseMeta
 }
@@ -422,6 +423,7 @@ func summarizeServices(dump structs.ServiceDump, cfg *config.RuntimeConfig, dc s
 	sum.Kind = svc.Kind
 	sum.Datacenter = csn.Node.Datacenter
 	sum.InstanceCount += 1
+	sum.ConnectNative = svc.Connect.Native
 	if svc.Kind == structs.ServiceKindConnectProxy {
 		sn := structs.NewServiceName(svc.Proxy.DestinationServiceName, &svc.EnterpriseMeta)
 		hasProxy[sn] = true
@@ -1281,6 +1281,142 @@ func TestUIServiceTopology(t *testing.T) {
 				},
 			},
 		},
+		"Node cnative": {
+			Datacenter: "dc1",
+			Node: "cnative",
+			Address: "127.0.0.6",
+			Checks: structs.HealthChecks{
+				&structs.HealthCheck{
+					Node: "cnative",
+					CheckID: "cnative:alive",
+					Name: "cnative-liveness",
+					Status: api.HealthPassing,
+				},
+			},
+		},
+		"Service cbackend on cnative": {
+			Datacenter: "dc1",
+			Node: "cnative",
+			SkipNodeUpdate: true,
+			Service: &structs.NodeService{
+				Kind: structs.ServiceKindTypical,
+				ID: "cbackend",
+				Service: "cbackend",
+				Port: 8080,
+				Address: "198.18.1.70",
+			},
+			Checks: structs.HealthChecks{
+				&structs.HealthCheck{
+					Node: "cnative",
+					CheckID: "cnative:cbackend",
+					Name: "cbackend-liveness",
+					Status: api.HealthPassing,
+					ServiceID: "cbackend",
+					ServiceName: "cbackend",
+				},
+			},
+		},
+		"Service cbackend-proxy on cnative": {
+			Datacenter: "dc1",
+			Node: "cnative",
+			SkipNodeUpdate: true,
+			Service: &structs.NodeService{
+				Kind: structs.ServiceKindConnectProxy,
+				ID: "cbackend-proxy",
+				Service: "cbackend-proxy",
+				Port: 8443,
+				Address: "198.18.1.70",
+				Proxy: structs.ConnectProxyConfig{
+					DestinationServiceName: "cbackend",
+				},
+			},
+			Checks: structs.HealthChecks{
+				&structs.HealthCheck{
+					Node: "cnative",
+					CheckID: "cnative:cbackend-proxy",
+					Name: "cbackend proxy listening",
+					Status: api.HealthCritical,
+					ServiceID: "cbackend-proxy",
+					ServiceName: "cbackend-proxy",
+				},
+			},
+		},
+		"Service cfrontend on cnative": {
+			Datacenter: "dc1",
+			Node: "cnative",
+			SkipNodeUpdate: true,
+			Service: &structs.NodeService{
+				Kind: structs.ServiceKindTypical,
+				ID: "cfrontend",
+				Service: "cfrontend",
+				Port: 9080,
+				Address: "198.18.1.70",
+			},
+			Checks: structs.HealthChecks{
+				&structs.HealthCheck{
+					Node: "cnative",
+					CheckID: "cnative:cfrontend",
+					Name: "cfrontend-liveness",
+					Status: api.HealthPassing,
+					ServiceID: "cfrontend",
+					ServiceName: "cfrontend",
+				},
+			},
+		},
+		"Service cfrontend-proxy on cnative": {
+			Datacenter: "dc1",
+			Node: "cnative",
+			SkipNodeUpdate: true,
+			Service: &structs.NodeService{
+				Kind: structs.ServiceKindConnectProxy,
+				ID: "cfrontend-proxy",
+				Service: "cfrontend-proxy",
+				Port: 9443,
+				Address: "198.18.1.70",
+				Proxy: structs.ConnectProxyConfig{
+					DestinationServiceName: "cfrontend",
+					Upstreams: structs.Upstreams{
+						{
+							DestinationName: "cproxy",
+							LocalBindPort: 123,
+						},
+					},
+				},
+			},
+			Checks: structs.HealthChecks{
+				&structs.HealthCheck{
+					Node: "cnative",
+					CheckID: "cnative:cfrontend-proxy",
+					Name: "cfrontend proxy listening",
+					Status: api.HealthCritical,
+					ServiceID: "cfrontend-proxy",
+					ServiceName: "cfrontend-proxy",
+				},
+			},
+		},
+		"Service cproxy on cnative": {
+			Datacenter: "dc1",
+			Node: "cnative",
+			SkipNodeUpdate: true,
+			Service: &structs.NodeService{
+				Kind: structs.ServiceKindTypical,
+				ID: "cproxy",
+				Service: "cproxy",
+				Port: 1111,
+				Address: "198.18.1.70",
+				Connect: structs.ServiceConnect{Native: true},
+			},
+			Checks: structs.HealthChecks{
+				&structs.HealthCheck{
+					Node: "cnative",
+					CheckID: "cnative:cproxy",
+					Name: "cproxy-liveness",
+					Status: api.HealthPassing,
+					ServiceID: "cproxy",
+					ServiceName: "cproxy",
+				},
+			},
+		},
 	}
 	for _, args := range registrations {
 		var out struct{}
@@ -1292,6 +1428,8 @@ func TestUIServiceTopology(t *testing.T) {
 	// wildcard deny intention
 	// api -> web exact intention
 	// web -> redis exact intention
+	// cfrontend -> cproxy exact intention
+	// cproxy -> cbackend exact intention
 	{
 		entries := []structs.ConfigEntryRequest{
 			{
@@ -1391,6 +1529,32 @@ func TestUIServiceTopology(t *testing.T) {
 					},
 				},
 			},
+			{
+				Datacenter: "dc1",
+				Entry: &structs.ServiceIntentionsConfigEntry{
+					Kind: structs.ServiceIntentions,
+					Name: "cproxy",
+					Sources: []*structs.SourceIntention{
+						{
+							Action: structs.IntentionActionAllow,
+							Name: "cfrontend",
+						},
+					},
+				},
+			},
+			{
+				Datacenter: "dc1",
+				Entry: &structs.ServiceIntentionsConfigEntry{
+					Kind: structs.ServiceIntentions,
+					Name: "cbackend",
+					Sources: []*structs.SourceIntention{
+						{
+							Action: structs.IntentionActionAllow,
+							Name: "cproxy",
+						},
+					},
+				},
+			},
 		}
 		for _, req := range entries {
 			out := false
@@ -1620,6 +1784,60 @@ func TestUIServiceTopology(t *testing.T) {
 				FilteredByACLs: false,
 			},
 		},
+		{
+			name: "cproxy",
+			httpReq: func() *http.Request {
+				req, _ := http.NewRequest("GET", "/v1/internal/ui/service-topology/cproxy?kind=", nil)
+				return req
+			}(),
+			want: &ServiceTopology{
+				Protocol: "http",
+				TransparentProxy: false,
+				Upstreams: []*ServiceTopologySummary{
+					{
+						ServiceSummary: ServiceSummary{
+							Name: "cbackend",
+							Datacenter: "dc1",
+							Nodes: []string{"cnative"},
+							InstanceCount: 1,
+							ChecksPassing: 2,
+							ChecksWarning: 0,
+							ChecksCritical: 1,
+							EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(),
+						},
+						Intention: structs.IntentionDecisionSummary{
+							DefaultAllow: true,
+							Allowed: true,
+							HasPermissions: false,
+							HasExact: true,
+						},
+						Source: structs.TopologySourceSpecificIntention,
+					},
+				},
+				Downstreams: []*ServiceTopologySummary{
+					{
+						ServiceSummary: ServiceSummary{
+							Name: "cfrontend",
+							Datacenter: "dc1",
+							Nodes: []string{"cnative"},
+							InstanceCount: 1,
+							ChecksPassing: 2,
+							ChecksWarning: 0,
+							ChecksCritical: 1,
+							EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(),
+						},
+						Intention: structs.IntentionDecisionSummary{
+							DefaultAllow: true,
+							Allowed: true,
+							HasPermissions: false,
+							HasExact: true,
+						},
+						Source: structs.TopologySourceRegistration,
+					},
+				},
+				FilteredByACLs: false,
+			},
+		},
 	}
 
 	for _, tc := range tcs {
@@ -1304,7 +1304,7 @@ func (s *ResourceGenerator) getAndModifyUpstreamConfigForListener(
 		configMap = u.Config
 	}
 	if chain == nil || chain.IsDefault() {
-		cfg, err = structs.ParseUpstreamConfig(configMap)
+		cfg, err = structs.ParseUpstreamConfigNoDefaults(configMap)
 		if err != nil {
 			// Don't hard fail on a config typo, just warn. The parse func returns
 			// default config if there is an error so it's safe to continue.
@@ -1327,6 +1327,7 @@ func (s *ResourceGenerator) getAndModifyUpstreamConfigForListener(
 			// Remove from config struct so we don't use it later on
 			cfg.EnvoyListenerJSON = ""
 		}
+	}
 
 	protocol := cfg.Protocol
 	if protocol == "" {
@@ -1338,7 +1339,6 @@ func (s *ResourceGenerator) getAndModifyUpstreamConfigForListener(
 
 	// set back on the config so that we can use it from return value
 	cfg.Protocol = protocol
-	}
 
 	return cfg
 }
@@ -1149,6 +1149,70 @@ func TestListenersFromSnapshot(t *testing.T) {
 			snap.ConnectProxy.DiscoveryChain[UID("no-endpoints")] = discoverychain.TestCompileConfigEntries(t, "no-endpoints", "default", "default", "dc1", connect.TestClusterID+".consul", nil)
 		},
 	},
+	{
+		name: "transparent-proxy-http-upstream",
+		create: proxycfg.TestConfigSnapshot,
+		setup: func(snap *proxycfg.ConfigSnapshot) {
+			snap.Proxy.Mode = structs.ProxyModeTransparent
+
+			snap.ConnectProxy.MeshConfigSet = true
+
+			// DiscoveryChain without an UpstreamConfig should yield a filter chain when in transparent proxy mode
+			google := structs.NewServiceName("google", nil)
+			googleUID := proxycfg.NewUpstreamIDFromServiceName(google)
+			snap.ConnectProxy.IntentionUpstreams = map[proxycfg.UpstreamID]struct{}{
+				googleUID: {},
+			}
+			snap.ConnectProxy.DiscoveryChain[googleUID] = discoverychain.TestCompileConfigEntries(t, "google", "default", "default", "dc1", connect.TestClusterID+".consul", nil,
+				// Set default service protocol to HTTP
+				&structs.ProxyConfigEntry{
+					Kind: structs.ProxyDefaults,
+					Name: structs.ProxyConfigGlobal,
+					Config: map[string]interface{}{
+						"protocol": "http",
+					},
+				})
+
+			snap.ConnectProxy.WatchedUpstreamEndpoints[googleUID] = map[string]structs.CheckServiceNodes{
+				"google.default.default.dc1": {
+					structs.CheckServiceNode{
+						Node: &structs.Node{
+							Address: "8.8.8.8",
+							Datacenter: "dc1",
+						},
+						Service: &structs.NodeService{
+							Service: "google",
+							Address: "9.9.9.9",
+							Port: 9090,
+							TaggedAddresses: map[string]structs.ServiceAddress{
+								"virtual": {Address: "10.0.0.1"},
+								structs.TaggedAddressVirtualIP: {Address: "240.0.0.1"},
+							},
+						},
+					},
+				},
+				// Other targets of the discovery chain should be ignored.
+				// We only match on the upstream's virtual IP, not the IPs of other targets.
+				"google-v2.default.default.dc1": {
+					structs.CheckServiceNode{
+						Node: &structs.Node{
+							Address: "7.7.7.7",
+							Datacenter: "dc1",
+						},
+						Service: &structs.NodeService{
+							Service: "google-v2",
+							TaggedAddresses: map[string]structs.ServiceAddress{
+								"virtual": {Address: "10.10.10.10"},
+							},
+						},
+					},
+				},
+			}
+
+			// DiscoveryChains without endpoints do not get a filter chain because there are no addresses to match on.
+			snap.ConnectProxy.DiscoveryChain[UID("no-endpoints")] = discoverychain.TestCompileConfigEntries(t, "no-endpoints", "default", "default", "dc1", connect.TestClusterID+".consul", nil)
+		},
+	},
 	{
 		name: "transparent-proxy-catalog-destinations-only",
 		create: proxycfg.TestConfigSnapshot,
203  agent/xds/testdata/listeners/transparent-proxy-http-upstream.envoy-1-20-x.golden (vendored, new file)

@@ -0,0 +1,203 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
      "name": "db:127.0.0.1:9191",
      "address": {
        "socketAddress": {
          "address": "127.0.0.1",
          "portValue": 9191
        }
      },
      "filterChains": [
        {
          "filters": [
            {
              "name": "envoy.filters.network.tcp_proxy",
              "typedConfig": {
                "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
                "statPrefix": "upstream.db.default.default.dc1",
                "cluster": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "trafficDirection": "OUTBOUND"
    },
    {
      "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
      "name": "outbound_listener:127.0.0.1:15001",
      "address": {
        "socketAddress": {
          "address": "127.0.0.1",
          "portValue": 15001
        }
      },
      "filterChains": [
        {
          "filterChainMatch": {
            "prefixRanges": [
              {
                "addressPrefix": "10.0.0.1",
                "prefixLen": 32
              },
              {
                "addressPrefix": "240.0.0.1",
                "prefixLen": 32
              }
            ]
          },
          "filters": [
            {
              "name": "envoy.filters.network.http_connection_manager",
              "typedConfig": {
                "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
                "statPrefix": "upstream.google.default.default.dc1",
                "routeConfig": {
                  "name": "google",
                  "virtualHosts": [
                    {
                      "name": "google.default.default.dc1",
                      "domains": [
                        "*"
                      ],
                      "routes": [
                        {
                          "match": {
                            "prefix": "/"
                          },
                          "route": {
                            "cluster": "google.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
                          }
                        }
                      ]
                    }
                  ]
                },
                "httpFilters": [
                  {
                    "name": "envoy.filters.http.router"
                  }
                ],
                "tracing": {
                  "randomSampling": {

                  }
                }
              }
            }
          ]
        },
        {
          "filters": [
            {
              "name": "envoy.filters.network.tcp_proxy",
              "typedConfig": {
                "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
                "statPrefix": "upstream.original-destination",
                "cluster": "original-destination"
              }
            }
          ]
        }
      ],
      "listenerFilters": [
        {
          "name": "envoy.filters.listener.original_dst"
        }
      ],
      "trafficDirection": "OUTBOUND"
    },
    {
      "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
      "name": "prepared_query:geo-cache:127.10.10.10:8181",
      "address": {
        "socketAddress": {
          "address": "127.10.10.10",
          "portValue": 8181
        }
      },
      "filterChains": [
        {
          "filters": [
            {
              "name": "envoy.filters.network.tcp_proxy",
              "typedConfig": {
                "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
                "statPrefix": "upstream.prepared_query_geo-cache",
                "cluster": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "trafficDirection": "OUTBOUND"
    },
    {
      "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
      "name": "public_listener:0.0.0.0:9999",
      "address": {
        "socketAddress": {
          "address": "0.0.0.0",
          "portValue": 9999
        }
      },
      "filterChains": [
        {
          "filters": [
            {
              "name": "envoy.filters.network.rbac",
              "typedConfig": {
                "@type": "type.googleapis.com/envoy.extensions.filters.network.rbac.v3.RBAC",
                "rules": {

                },
                "statPrefix": "connect_authz"
              }
            },
            {
              "name": "envoy.filters.network.tcp_proxy",
              "typedConfig": {
                "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
                "statPrefix": "public_listener",
                "cluster": "local_app"
              }
            }
          ],
          "transportSocket": {
            "name": "tls",
            "typedConfig": {
              "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
              "commonTlsContext": {
                "tlsParams": {

                },
                "tlsCertificates": [
                  {
                    "certificateChain": {
                      "inlineString": "-----BEGIN CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n"
                    },
                    "privateKey": {
                      "inlineString": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n"
                    }
                  }
                ],
                "validationContext": {
                  "trustedCa": {
                    "inlineString": "-----BEGIN CERTIFICATE-----\nMIICXDCCAgKgAwIBAgIICpZq70Z9LyUwCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowFDESMBAG\nA1UEAxMJVGVzdCBDQSAyMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIhywH1gx\nAsMwuF3ukAI5YL2jFxH6Usnma1HFSfVyxbXX1/uoZEYrj8yCAtdU2yoHETyd+Zx2\nThhRLP79pYegCaOCATwwggE4MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTAD\nAQH/MGgGA1UdDgRhBF9kMToxMToxMTphYzoyYTpiYTo5NzpiMjozZjphYzo3Yjpi\nZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1ZTo0MTo2ZjpmMjo3\nMzo5NTo1ODowYzpkYjBqBgNVHSMEYzBhgF9kMToxMToxMTphYzoyYTpiYTo5Nzpi\nMjozZjphYzo3YjpiZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1\nZTo0MTo2ZjpmMjo3Mzo5NTo1ODowYzpkYjA/BgNVHREEODA2hjRzcGlmZmU6Ly8x\nMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqG\nSM49BAMCA0gAMEUCICOY0i246rQHJt8o8Oya0D5PLL1FnmsQmQqIGCi31RwnAiEA\noR5f6Ku+cig2Il8T8LJujOp2/2A72QcHZA57B13y+8o=\n-----END CERTIFICATE-----\n"
                  }
                }
              },
              "requireClientCertificate": true
            }
          }
        }
      ],
      "trafficDirection": "INBOUND"
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
  "nonce": "00000001"
}
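Golden files like the one above are typically compared against freshly generated xDS output in the listener tests. A generic sketch of that pattern follows; the helper name and path are illustrative assumptions, not the helpers this test suite actually uses.

```go
// Generic golden-file comparison sketch, assuming testify is available.
package xds_test

import (
	"os"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"
)

// requireJSONEqualsGolden compares generated JSON against a file under
// testdata/. The name and layout are hypothetical.
func requireJSONEqualsGolden(t *testing.T, goldenPath, actual string) {
	t.Helper()
	expected, err := os.ReadFile(filepath.Join("testdata", goldenPath))
	require.NoError(t, err)
	require.JSONEq(t, string(expected), actual)
}
```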
@@ -53,7 +53,7 @@ type Namespaces struct {
 	c *Client
 }
 
-// Operator returns a handle to the operator endpoints.
+// Namespaces returns a handle to the namespaces endpoints.
 func (c *Client) Namespaces() *Namespaces {
 	return &Namespaces{c}
 }
@@ -326,7 +326,7 @@ function build_consul {
       -e CGO_ENABLED=0 \
       -e GOLDFLAGS="${GOLDFLAGS}" \
       -e GOTAGS="${GOTAGS}" \
-      ${image_name} make linux
+      ${image_name} make linux)
    ret=$?

    if test $ret -eq 0
1  go.mod

@@ -22,6 +22,7 @@ require (
 	github.com/elazarl/go-bindata-assetfs v0.0.0-20160803192304-e1a2a7ec64b0
 	github.com/envoyproxy/go-control-plane v0.9.5
 	github.com/frankban/quicktest v1.11.0 // indirect
+	github.com/fsnotify/fsnotify v1.5.1
 	github.com/gogo/protobuf v1.3.2
 	github.com/golang/protobuf v1.3.5
 	github.com/google/go-cmp v0.5.6
3  go.sum

@@ -142,6 +142,8 @@ github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga
 github.com/frankban/quicktest v1.11.0 h1:Yyrghcw93e1jKo4DTZkRFTTFvBsVhzbblBUPNU1vW6Q=
 github.com/frankban/quicktest v1.11.0/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
+github.com/fsnotify/fsnotify v1.5.1 h1:mZcQUHVQUQWoPXXtuf9yuEXKudkV2sx1E06UadKWpgI=
+github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5Ai1i3InKU=
 github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
 github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
 github.com/go-asn1-ber/asn1-ber v1.3.1/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkProFKoKdwZRWMe0=
@@ -649,6 +651,7 @@ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20210816074244-15123e1e1f71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20211013075003-97ac67df715c h1:taxlMj0D/1sOAuv/CbSD+MMDof2vbyPTqz5FNYKpXt8=
 golang.org/x/sys v0.0.0-20211013075003-97ac67df715c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -20,7 +20,10 @@
     "npm-run-all": "^4.1.5"
   },
   "resolutions": {
-    "xmlhttprequest-ssl": "^1.6.3"
+    "xmlhttprequest-ssl": "^1.6.3",
+    "ember-basic-dropdown": "3.0.21",
+    "ember-changeset": "3.10.1",
+    "validated-changeset": "0.10.0"
   },
   "engines": {
     "node": ">=10 <=14"
@@ -119,32 +119,32 @@
       Logout
     </Action>
   </Portal>
-  <PopoverMenu @position="right" as |components api|>
-    <BlockSlot @name="trigger">
+  <DisclosureMenu as |disclosure|>
+    <disclosure.Action
+      {{on 'click' disclosure.toggle}}
+    >
       Logout
-    </BlockSlot>
-    <BlockSlot @name="menu">
-      {{#let components.MenuItem components.MenuSeparator as |MenuItem MenuSeparator|}}
-        {{!TODO: It might be nice to use one of our recursive components here}}
-        {{#if authDialog.token.AccessorID}}
-          <li role="none">
+    </disclosure.Action>
+    <disclosure.Menu as |panel|>
+      {{#if authDialog.token.AccessorID}}
        <AuthProfile
          @item={{authDialog.token}}
        />
-          </li>
-        {{/if}}
-        <MenuSeparator />
-        <MenuItem
+      {{/if}}
+      <panel.Menu as |menu|>
+        <menu.Separator />
+        <menu.Item
          class="dangerous"
-          @onclick={{action authDialog.logout}}
        >
-          <BlockSlot @name="label">
+          <menu.Action
+            {{on 'click' (optional authDialog.logout)}}
+          >
            Logout
-          </BlockSlot>
-        </MenuItem>
-      {{/let}}
-    </BlockSlot>
-  </PopoverMenu>
+          </menu.Action>
+        </menu.Item>
+      </panel.Menu>
+    </disclosure.Menu>
+  </DisclosureMenu>
 </:authorized>
</AuthDialog>
@@ -1,21 +1,21 @@
{{#if (can "use nspaces")}}
{{#if (can "choose nspaces")}}
{{#let
  (or @nspace 'default')
as |nspace|}}
  <li
    class="nspaces"
    data-test-nspace-menu
  >
-    <PopoverMenu
+    <DisclosureMenu
      aria-label="Namespace"
-      @position="left"
-    as |components api|>
-      <BlockSlot @name="trigger">
+    as |disclosure|>
+      <disclosure.Action
+        {{on 'click' disclosure.toggle}}
+      >
        {{nspace}}
-      </BlockSlot>
-      <BlockSlot @name="menu">
-        {{#let components.MenuItem components.MenuSeparator as |MenuItem MenuSeparator|}}
+      </disclosure.Action>
+      <disclosure.Menu as |panel|>
        {{#if (gt @nspaces.length 0)}}
          <DataSource
            @src={{uri
@@ -26,7 +26,6 @@ as |nspace|}}
            )
            }}
            @onchange={{fn (optional @onchange)}}
-            @loading="lazy"
          />
        {{else}}
          <DataSource
@@ -40,35 +39,39 @@ as |nspace|}}
          @onchange={{fn (optional @onchange)}}
          />
        {{/if}}
+        <panel.Menu as |menu|>
          {{#each (reject-by 'DeletedAt' @nspaces) as |item|}}
-            <MenuItem
-              class={{if (eq nspace item.Name) 'is-active'}}
+            <menu.Item
+              aria-current={{if (eq nspace item.Name) 'true'}}
+            >
+              <menu.Action
+                {{on 'click' disclosure.close}}
                @href={{href-to '.' params=(hash
                  partition=(if (gt @partition.length 0) @partition undefined)
                  nspace=item.Name
                )}}
              >
-              <BlockSlot @name="label">
                {{item.Name}}
-              </BlockSlot>
-            </MenuItem>
+              </menu.Action>
+            </menu.Item>
          {{/each}}
          {{#if (can 'manage nspaces')}}
-            <MenuSeparator />
-            <MenuItem
+            <menu.Separator />
+            <menu.Item
              data-test-main-nav-nspaces
+            >
+              <menu.Action
+                {{on 'click' disclosure.close}}
                @href={{href-to 'dc.nspaces' @dc.Name}}
              >
-              <BlockSlot @name="label">
                Manage Namespaces
-              </BlockSlot>
-            </MenuItem>
+              </menu.Action>
+            </menu.Item>
          {{/if}}
-        {{/let}}
-      </BlockSlot>
-    </PopoverMenu>
+        </panel.Menu>
+      </disclosure.Menu>
+    </DisclosureMenu>
  </li>
{{/let}}
{{/if}}
{{/if}}
@@ -6,15 +6,15 @@ as |partition|}}
    class="partitions"
    data-test-partition-menu
  >
-    <PopoverMenu
+    <DisclosureMenu
      aria-label="Admin Partition"
-      @position="left"
-    as |components api|>
-      <BlockSlot @name="trigger">
+    as |disclosure|>
+      <disclosure.Action
+        {{on 'click' disclosure.toggle}}
+      >
        {{partition}}
-      </BlockSlot>
-      <BlockSlot @name="menu">
-        {{#let components.MenuItem components.MenuSeparator as |MenuItem MenuSeparator|}}
+      </disclosure.Action>
+      <disclosure.Menu as |panel|>
        <DataSource
          @src={{uri
            '/*/*/${dc}/partitions'
@@ -24,33 +24,38 @@ as |partition|}}
          }}
          @onchange={{fn (optional @onchange)}}
        />
+        <panel.Menu as |menu|>
          {{#each (reject-by 'DeletedAt' @partitions) as |item|}}
-            <MenuItem
+            <menu.Item
              class={{if (eq partition item.Name) 'is-active'}}
+            >
+              <menu.Action
+                {{on 'click' disclosure.close}}
                @href={{href-to '.' params=(hash
                  partition=item.Name
                  nspace=undefined
                )}}
              >
-              <BlockSlot @name="label">
                {{item.Name}}
-              </BlockSlot>
-            </MenuItem>
+              </menu.Action>
+            </menu.Item>
          {{/each}}
          {{#if (can 'manage partitions')}}
-            <MenuSeparator />
-            <MenuItem
+            <menu.Separator />
+            <menu.Item
              data-test-main-nav-partitions
+            >
+              <menu.Action
+                {{on 'click' disclosure.close}}
                @href={{href-to 'dc.partitions.index' @dc.Name}}
              >
-              <BlockSlot @name="label">
                Manage Partitions
-              </BlockSlot>
-            </MenuItem>
+              </menu.Action>
+            </menu.Item>
          {{/if}}
-        {{/let}}
-      </BlockSlot>
-    </PopoverMenu>
+        </panel.Menu>
+      </disclosure.Menu>
+    </DisclosureMenu>
  </li>
{{else}}
  <li
@@ -1,9 +1,11 @@
const path = require('path');

const autolinkHeadings = require('remark-autolink-headings');
+const prism = require('./lib/rehype-prism/index');
const refractor = require('refractor');
const gherkin = require('refractor/lang/gherkin');
-const prism = require('@mapbox/rehype-prism');
+const mermaid = require('refractor/lang/mermaid');
+const handlebars = require('refractor/lang/handlebars');

const fs = require('fs');
const read = fs.readFileSync;
@@ -26,8 +28,14 @@ if($CONSUL_DOCFY_CONFIG.length > 0) {
}

refractor.register(gherkin);
-refractor.alias('handlebars', 'hbs');
-refractor.alias('shell', 'sh');
+refractor.register(mermaid);
+refractor.register(handlebars);
+
+refractor.alias({
+  handlebars: ['hbs'],
+  shell: ['sh']
+});


module.exports = {
@@ -52,13 +52,22 @@
  margin-left: auto;
}
%main-nav-vertical-hoisted {
-  top: 11px;
+  top: 18px;
}
-%main-nav-vertical-hoisted > .popover-menu > label > button {
+%main-nav-vertical-hoisted [aria-label]::before {
+  display: none !important;
+}
+%main-nav-horizontal [aria-haspopup='menu'] ~ * {
+  position: absolute;
+  right: 0;
+  min-width: 192px;
+}
+%main-nav-horizontal [aria-expanded],
+%main-nav-vertical-hoisted [aria-expanded] {
+  @extend %main-nav-horizontal-popover-menu-trigger;
  @extend %main-nav-horizontal-action;
-  border: none;
}
-%main-nav-vertical-hoisted.is-active > label > * {
+%main-nav-horizontal-popover-menu-trigger {
  @extend %main-nav-horizontal-action-active;
}
%footer,
@@ -1,66 +0,0 @@
# CollapsibleNotices

Used as a wrapper to collapse the details of `<Notices/>`.

```hbs preview-template
<CollapsibleNotices>
  <Notice
    @type="error"
    role="alert"
  as |notice|>
    <notice.Header>
      <h3>Header</h3>
    </notice.Header>
    <notice.Body>
      <p>
        Body
      </p>
    </notice.Body>
  </Notice>
  <Notice
    @type="info"
  as |notice|>
    <notice.Header>
      <h3>Header</h3>
    </notice.Header>
    <notice.Body>
      <p>
        Body
      </p>
    </notice.Body>
    <notice.Footer>
      <p>
        Footer
      </p>
    </notice.Footer>
  </Notice>
  <Notice
    @type="warning"
  as |notice|>
    <notice.Header>
      <h3>Header</h3>
    </notice.Header>
    <notice.Body>
      <p>
        Body
      </p>
    </notice.Body>
    <notice.Footer>
      <p>
        Footer
      </p>
    </notice.Footer>
  </Notice>
</CollapsibleNotices>

```

## Arguments

No arguments required. Wrap this component around the needed notices.

## See

- [Template Source Code](./index.hbs)

---
@@ -1,14 +0,0 @@
{{#if @collapsible}}
  <div class="collapsible-notices {{if this.collapsed 'collapsed'}}">
    <div class="notices">
      {{yield}}
    </div>
    {{#if this.collapsed}}
      <button type="button" class="expand" {{on 'click' (set this 'collapsed' false)}}>{{t "components.app.collapsible-notices.expand"}}</button>
    {{else}}
      <button type="button" class="collapse" {{on 'click' (set this 'collapsed' true)}}>{{t "components.app.collapsible-notices.collapse"}}</button>
    {{/if}}
  </div>
{{else}}
  {{yield}}
{{/if}}
@@ -1,3 +0,0 @@
import Component from '@glimmer/component';

export default class CollapsibleNotices extends Component {}
@@ -1,31 +0,0 @@
.collapsible-notices {
  display: grid;
  grid-template-columns: auto 168px;
  grid-template-rows: auto 55px;
  grid-template-areas:
    'notices notices'
    '. toggle-button';
  &.collapsed p {
    display: none;
  }
  .notices {
    grid-area: notices;
    :last-child {
      margin-bottom: 0;
    }
  }
  button {
    @extend %button;
    color: rgb(var(--color-action));
    float: right;
    grid-area: toggle-button;
    margin-top: 1em;
    margin-bottom: 2em;
  }
  button.expand::before {
    @extend %with-chevron-down-mask, %as-pseudo;
  }
  button.collapse::before {
    @extend %with-chevron-up-mask, %as-pseudo;
  }
}
@@ -0,0 +1,49 @@
<li
  class="dcs"
  data-test-datacenter-menu
>
  <DisclosureMenu
    aria-label="Datacenter"
  as |disclosure|>
    <disclosure.Action
      {{on 'click' disclosure.toggle}}
    >
      {{@dc.Name}}
    </disclosure.Action>
    <disclosure.Menu as |panel|>
      <DataSource
        @src={{uri '/*/*/*/datacenters'}}
        @onchange={{action (mut @dcs) value="data"}}
      />
      <panel.Menu as |menu|>
        {{#each (sort-by 'Name' @dcs) as |item|}}
          <menu.Item
            aria-current={{if (eq @dc.Name item.Name) 'true'}}
            class={{class-map
              (array 'is-local' item.Local)
              (array 'is-primary' item.Primary)
            }}
          >
            <menu.Action
              {{on 'click' disclosure.close}}
              @href={{href-to '.' params=(hash
                dc=item.Name
                partition=undefined
                nspace=(if (gt @nspace.length 0) @nspace undefined)
              )}}
            >
              {{item.Name}}
              {{#if item.Primary}}
                <span>Primary</span>
              {{/if}}
              {{#if item.Local}}
                <span>Local</span>
              {{/if}}
            </menu.Action>
          </menu.Item>
        {{/each}}
      </panel.Menu>
    </disclosure.Menu>
  </DisclosureMenu>
</li>
@@ -19,13 +19,15 @@ common usecase of having a floating menu.
  >
    {{if disclosure.expanded 'Close' 'Open'}}
  </disclosure.Action>
-  <disclosure.Menu as |menu|>
+  <disclosure.Menu as |panel|>
+    <panel.Menu as |menu|>
      <menu.Item>
        <menu.Action>Item 1</menu.Action>
      </menu.Item>
      <menu.Item>
        <menu.Action>Item 2</menu.Action>
      </menu.Item>
+    </panel.Menu>
  </disclosure.Menu>
</DisclosureMenu>
</figure>
@@ -46,13 +48,15 @@ common usecase of having a floating menu.
      (array 'top' this.height)
      (array 'background-color' 'rgb(var(--tone-gray-000))')
    }}
-  as |menu|>
+  as |panel|>
+    <panel.Menu as |menu|>
      <menu.Item>
        <menu.Action>Item 1</menu.Action>
      </menu.Item>
      <menu.Item>
        <menu.Action>Item 2</menu.Action>
      </menu.Item>
+    </panel.Menu>
  </disclosure.Menu>
</DisclosureMenu>
</figure>
@@ -4,12 +4,16 @@
  }}
  ...attributes
>
-  <Disclosure as |disclosure|>
+  <Disclosure
+    @expanded={{@expanded}}
+  as |disclosure|>
    {{yield (hash
      Action=(component 'disclosure-menu/action' disclosure=disclosure)
      Menu=(component 'disclosure-menu/menu' disclosure=disclosure)
      disclosure=disclosure
      toggle=disclosure.toggle
+      close=disclosure.close
+      open=disclosure.open
      expanded=disclosure.expanded
    )}}
  </Disclosure>
@@ -1,3 +1,6 @@
.disclosure-menu {
  position: relative;
}
+.disclosure-menu [aria-expanded] ~ * {
+  @extend %menu-panel;
+}
@@ -1,15 +1,11 @@
<@disclosure.Details as |details|>
-  <Menu
+  <div
    {{on-outside 'click' @disclosure.close}}
-    @disclosure={{@disclosure}}
    ...attributes
-  as |menu|>
+  >
    {{yield (hash
-      items=menu.items
-      Item=menu.Item
-      Action=menu.Action
-      Separator=menu.Separator
+      Menu=(component 'menu' disclosure=@disclosure)
    )}}
-  </Menu>
+  </div>
</@disclosure.Details>
@@ -209,6 +209,30 @@ An `<Action />` component with the correct aria attributes added.
| `id` | `String` | A unique id which you **should** (for aria reasons) use for the root DOM element you are controlling with the disclosure |
| `expanded` | `Boolean` | An alias of `disclosure.expanded`. Whether the disclosure is 'expanded' or not. If disclosure of the `Details` is controlled via CSS you **should** use this to set/unset `aria-hidden` |

+## Internal States
+
+Opened and closed states of the Disclosure are managed internally by a simple boolean state machine:
+
+```mermaid
+stateDiagram-v2
+  [*] --> false
+  true --> false: TOGGLE
+  true --> false: FALSE
+  false --> true: TOGGLE
+  false --> true: TRUE
+```
+
+which in the context of the Disclosure component is better represented via:
+
+```mermaid
+stateDiagram-v2
+  [*] --> closed
+  opened --> closed: TOGGLE
+  opened --> closed: CLOSE
+  closed --> opened: TOGGLE
+  closed --> opened: OPEN
+```
+
## See

- [Component Source Code](./index.js)
@@ -87,53 +87,12 @@

  <:main-nav>
    <ul>
-      <li
-        class="dcs"
-        data-test-datacenter-menu
-      >
-        <PopoverMenu
-          aria-label="Datacenter"
-          @position="left"
-        as |components|>
-          <BlockSlot @name="trigger">
-            {{@dc.Name}}
-          </BlockSlot>
-          <BlockSlot @name="menu">
-            {{#let components.MenuItem components.MenuSeparator as |MenuItem MenuSeparator|}}
-              <DataSource
-                @src={{uri '/*/*/*/datacenters'}}
-                @onchange={{action (mut @dcs) value="data"}}
-                @loading="lazy"
-              />
-              {{#each (sort-by 'Name' @dcs) as |item|}}
-                <MenuItem
-                  data-test-datacenter-picker
-                  class={{concat
-                    (if (eq @dc.Name item.Name) 'is-active')
-                    (if item.Local ' is-local')
-                    (if item.Primary ' is-primary')
-                  }}
-                  @href={{href-to '.' params=(hash
-                    dc=item.Name
-                    partition=undefined
-                    nspace=(if (gt @nspace.length 0) @nspace undefined)
-                  )}}
-                >
-                  <BlockSlot @name="label">
-                    {{item.Name}}
-                    {{#if item.Primary}}
-                      <span>Primary</span>
-                    {{/if}}
-                    {{#if item.Local}}
-                      <span>Local</span>
-                    {{/if}}
-                  </BlockSlot>
-                </MenuItem>
-              {{/each}}
-            {{/let}}
-          </BlockSlot>
-        </PopoverMenu>
-      </li>
+      <Consul::Datacenter::Selector
+        @dc={{@dc}}
+        @partition={{@partition}}
+        @nspace={{@nspace}}
+        @dcs={{@dcs}}
+      />
      <Consul::Partition::Selector
        @dc={{@dc}}
        @partition={{@partition}}
|
||||||
<li
|
<li
|
||||||
data-test-main-nav-help
|
data-test-main-nav-help
|
||||||
>
|
>
|
||||||
<PopoverMenu @position="right" as |components|>
|
<DisclosureMenu
|
||||||
<BlockSlot @name="trigger">
|
as |disclosure|>
|
||||||
|
<disclosure.Action
|
||||||
|
{{on 'click' disclosure.toggle}}
|
||||||
|
>
|
||||||
Help
|
Help
|
||||||
</BlockSlot>
|
</disclosure.Action>
|
||||||
<BlockSlot @name="menu">
|
<disclosure.Menu as |panel|>
|
||||||
{{#let components.MenuItem components.MenuSeparator as |MenuItem MenuSeparator|}}
|
<panel.Menu as |menu|>
|
||||||
<MenuSeparator>
|
<menu.Separator>
|
||||||
<BlockSlot @name="label">
|
|
||||||
Consul v{{env 'CONSUL_VERSION'}}
|
Consul v{{env 'CONSUL_VERSION'}}
|
||||||
</BlockSlot>
|
</menu.Separator>
|
||||||
</MenuSeparator>
|
<menu.Item
|
||||||
<MenuItem
|
|
||||||
class="docs-link"
|
class="docs-link"
|
||||||
|
>
|
||||||
|
<menu.Action
|
||||||
@href={{env 'CONSUL_DOCS_URL'}}
|
@href={{env 'CONSUL_DOCS_URL'}}
|
||||||
|
@external={{true}}
|
||||||
>
|
>
|
||||||
<BlockSlot @name="label">
|
|
||||||
Documentation
|
Documentation
|
||||||
</BlockSlot>
|
</menu.Action>
|
||||||
</MenuItem>
|
</menu.Item>
|
||||||
<MenuItem
|
<menu.Item
|
||||||
class="learn-link"
|
class="learn-link"
|
||||||
|
>
|
||||||
|
<menu.Action
|
||||||
@href={{concat (env 'CONSUL_DOCS_LEARN_URL') '/consul'}}
|
@href={{concat (env 'CONSUL_DOCS_LEARN_URL') '/consul'}}
|
||||||
|
@external={{true}}
|
||||||
>
|
>
|
||||||
<BlockSlot @name="label">
|
|
||||||
HashiCorp Learn
|
HashiCorp Learn
|
||||||
</BlockSlot>
|
</menu.Action>
|
||||||
</MenuItem>
|
</menu.Item>
|
||||||
<MenuSeparator />
|
<menu.Separator />
|
||||||
<MenuItem
|
<menu.Item
|
||||||
class="learn-link"
|
class="feedback-link"
|
||||||
@href={{env 'CONSUL_REPO_ISSUES_URL'}}
|
>
|
||||||
|
<menu.Action
|
||||||
|
@href={{env 'CONSUL_REPO_ISSUES_URL'}}
|
||||||
|
@external={{true}}
|
||||||
>
|
>
|
||||||
<BlockSlot @name="label">
|
|
||||||
Provide Feedback
|
Provide Feedback
|
||||||
</BlockSlot>
|
</menu.Action>
|
||||||
</MenuItem>
|
</menu.Item>
|
||||||
{{/let}}
|
</panel.Menu>
|
||||||
</BlockSlot>
|
</disclosure.Menu>
|
||||||
</PopoverMenu>
|
</DisclosureMenu>
|
||||||
</li>
|
</li>
|
||||||
<li
|
<li
|
||||||
data-test-main-nav-settings
|
data-test-main-nav-settings
|
||||||
|
|
|
@ -1,11 +1,18 @@
|
||||||
%hashicorp-consul {
|
%hashicorp-consul {
|
||||||
[role='banner'] nav .dcs {
|
nav .dcs {
|
||||||
@extend %main-nav-vertical-hoisted;
|
@extend %main-nav-vertical-hoisted;
|
||||||
left: 100px;
|
left: 100px;
|
||||||
}
|
}
|
||||||
[role='banner'] nav .dcs .popover-menu[aria-label]::before {
|
nav .dcs .menu-panel {
|
||||||
display: none;
|
min-width: 250px;
|
||||||
}
|
}
|
||||||
|
nav li.partitions,
|
||||||
|
nav li.nspaces {
|
||||||
|
@extend %main-nav-vertical-popover-menu;
|
||||||
|
/* --panel-height: 300px;
|
||||||
|
--row-height: 43px; */
|
||||||
|
}
|
||||||
|
|
||||||
[role='banner'] a svg {
|
[role='banner'] a svg {
|
||||||
fill: rgb(var(--tone-brand-600));
|
fill: rgb(var(--tone-brand-600));
|
||||||
}
|
}
|
||||||
|
|
|
@ -50,7 +50,7 @@ export default (collection, clickable, attribute, is, authForm, emptyState) => s
|
||||||
':checked',
|
':checked',
|
||||||
'[data-test-nspace-menu] > input[type="checkbox"]'
|
'[data-test-nspace-menu] > input[type="checkbox"]'
|
||||||
);
|
);
|
||||||
page.navigation.dcs = collection('[data-test-datacenter-picker]', {
|
page.navigation.dcs = collection('[data-test-datacenter-menu] li', {
|
||||||
name: clickable('a'),
|
name: clickable('a'),
|
||||||
});
|
});
|
||||||
return page;
|
return page;
|
||||||
|
|
|
@ -5,6 +5,7 @@
|
||||||
%main-nav-horizontal > ul > li > a,
|
%main-nav-horizontal > ul > li > a,
|
||||||
%main-nav-horizontal > ul > li > span,
|
%main-nav-horizontal > ul > li > span,
|
||||||
%main-nav-horizontal > ul > li > button,
|
%main-nav-horizontal > ul > li > button,
|
||||||
|
%main-nav-horizontal-popover-menu-trigger,
|
||||||
%main-nav-horizontal > ul > li > .popover-menu > label > button {
|
%main-nav-horizontal > ul > li > .popover-menu > label > button {
|
||||||
@extend %main-nav-horizontal-action;
|
@extend %main-nav-horizontal-action;
|
||||||
}
|
}
|
||||||
|
|
|
@ -5,6 +5,15 @@
|
||||||
%main-nav-horizontal-action > a {
|
%main-nav-horizontal-action > a {
|
||||||
color: inherit;
|
color: inherit;
|
||||||
}
|
}
|
||||||
|
%main-nav-horizontal-popover-menu-trigger::after {
|
||||||
|
@extend %with-chevron-down-mask, %as-pseudo;
|
||||||
|
width: 16px;
|
||||||
|
height: 16px;
|
||||||
|
position: relative;
|
||||||
|
}
|
||||||
|
%main-nav-horizontal-popover-menu-trigger[aria-expanded='true']::after {
|
||||||
|
@extend %with-chevron-up-mask;
|
||||||
|
}
|
||||||
/**/
|
/**/
|
||||||
/* reduced size hamburger menu */
|
/* reduced size hamburger menu */
|
||||||
%main-nav-horizontal-toggle {
|
%main-nav-horizontal-toggle {
|
||||||
|
|
|
@ -30,34 +30,12 @@
|
||||||
%main-nav-vertical > ul > li > label {
|
%main-nav-vertical > ul > li > label {
|
||||||
@extend %main-nav-vertical-action;
|
@extend %main-nav-vertical-action;
|
||||||
}
|
}
|
||||||
/**/
|
|
||||||
|
|
||||||
%main-nav-vertical .popover-menu {
|
|
||||||
margin-top: 0.5rem;
|
|
||||||
}
|
|
||||||
%main-nav-vertical .popover-menu .menu-panel {
|
|
||||||
top: 37px !important;
|
|
||||||
border-top-left-radius: 0;
|
|
||||||
border-top-right-radius: 0;
|
|
||||||
}
|
|
||||||
%main-nav-vertical .popover-menu > label > button {
|
|
||||||
border: var(--decor-border-100);
|
|
||||||
border-color: rgb(var(--tone-gray-500));
|
|
||||||
color: rgb(var(--tone-gray-999));
|
|
||||||
width: calc(100% - 20px);
|
|
||||||
z-index: 100;
|
|
||||||
text-align: left;
|
|
||||||
padding: 10px;
|
|
||||||
border-radius: var(--decor-radius-100);
|
|
||||||
}
|
|
||||||
%main-nav-vertical .popover-menu > label > button::after {
|
|
||||||
float: right;
|
|
||||||
}
|
|
||||||
%main-nav-vertical .popover-menu .menu-panel {
|
|
||||||
top: 28px;
|
|
||||||
z-index: 100;
|
|
||||||
}
|
|
||||||
/* menu-panels in the main navigation are treated slightly differently */
|
/* menu-panels in the main navigation are treated slightly differently */
|
||||||
%main-nav-vertical label + div {
|
%main-nav-vertical-popover-menu .disclosure-menu button + * {
|
||||||
@extend %main-nav-vertical-menu-panel;
|
@extend %main-nav-vertical-menu-panel;
|
||||||
}
|
}
|
||||||
|
/**/
|
||||||
|
%main-nav-vertical-popover-menu .disclosure-menu > button {
|
||||||
|
@extend %main-nav-vertical-popover-menu-trigger;
|
||||||
|
@extend %internal-button;
|
||||||
|
}
|
||||||
|
|
|
@ -11,14 +11,13 @@
|
||||||
%main-nav-vertical:not(.in-viewport) {
|
%main-nav-vertical:not(.in-viewport) {
|
||||||
visibility: hidden;
|
visibility: hidden;
|
||||||
}
|
}
|
||||||
%main-nav-vertical li.partitions,
|
|
||||||
%main-nav-vertical li.partition,
|
%main-nav-vertical li.partition,
|
||||||
|
%main-nav-vertical li.partitions,
|
||||||
%main-nav-vertical li.nspaces {
|
%main-nav-vertical li.nspaces {
|
||||||
margin-bottom: 25px;
|
margin-bottom: 25px;
|
||||||
padding: 0 26px;
|
padding: 0 26px;
|
||||||
}
|
}
|
||||||
%main-nav-vertical li.dcs {
|
%main-nav-vertical li.dcs {
|
||||||
margin-bottom: 18px;
|
|
||||||
padding: 0 18px;
|
padding: 0 18px;
|
||||||
}
|
}
|
||||||
// TODO: We no longer have the rule that menu-panel buttons only contain two
|
// TODO: We no longer have the rule that menu-panel buttons only contain two
|
||||||
|
@ -41,9 +40,21 @@
|
||||||
margin-top: 0.7rem;
|
margin-top: 0.7rem;
|
||||||
padding-bottom: 0;
|
padding-bottom: 0;
|
||||||
}
|
}
|
||||||
|
%main-nav-vertical-popover-menu .disclosure {
|
||||||
|
position: relative;
|
||||||
|
}
|
||||||
|
%main-nav-vertical-popover-menu-trigger {
|
||||||
|
width: 100%;
|
||||||
|
text-align: left;
|
||||||
|
padding: 10px;
|
||||||
|
}
|
||||||
|
%main-nav-vertical-popover-menu-trigger::after {
|
||||||
|
float: right;
|
||||||
|
}
|
||||||
%main-nav-vertical-menu-panel {
|
%main-nav-vertical-menu-panel {
|
||||||
min-width: 248px;
|
position: absolute;
|
||||||
|
z-index: 1;
|
||||||
|
width: calc(100% - 2px);
|
||||||
}
|
}
|
||||||
%main-nav-vertical-hoisted {
|
%main-nav-vertical-hoisted {
|
||||||
visibility: visible;
|
visibility: visible;
|
||||||
|
|
|
@ -1,8 +1,8 @@
|
||||||
%main-nav-vertical-action {
|
%main-nav-vertical-action {
|
||||||
|
@extend %p1;
|
||||||
cursor: pointer;
|
cursor: pointer;
|
||||||
border-right: var(--decor-border-400);
|
border-right: var(--decor-border-400);
|
||||||
border-color: var(--transparent);
|
border-color: var(--transparent);
|
||||||
@extend %p1;
|
|
||||||
}
|
}
|
||||||
%main-nav-vertical-action > a {
|
%main-nav-vertical-action > a {
|
||||||
color: inherit;
|
color: inherit;
|
||||||
|
@ -41,28 +41,38 @@
|
||||||
background-color: rgb(var(--tone-gray-150));
|
background-color: rgb(var(--tone-gray-150));
|
||||||
border-color: rgb(var(--tone-gray-999));
|
border-color: rgb(var(--tone-gray-999));
|
||||||
}
|
}
|
||||||
%main-nav-vertical li[aria-label]::before,
|
%main-nav-vertical [aria-label]::before {
|
||||||
%main-nav-vertical .popover-menu[aria-label]::before {
|
|
||||||
color: rgb(var(--tone-gray-700));
|
color: rgb(var(--tone-gray-700));
|
||||||
content: attr(aria-label);
|
content: attr(aria-label);
|
||||||
display: block;
|
display: block;
|
||||||
margin-top: -0.5rem;
|
margin-top: -0.5rem;
|
||||||
margin-bottom: 0.5rem;
|
margin-bottom: 0.5rem;
|
||||||
}
|
}
|
||||||
%main-nav-vertical .is-primary span,
|
%main-nav-vertical-popover-menu-trigger {
|
||||||
%main-nav-vertical .is-local span {
|
border: var(--decor-border-100);
|
||||||
@extend %pill-200;
|
border-color: rgb(var(--tone-gray-500));
|
||||||
color: rgb(var(--tone-gray-000));
|
border-radius: var(--decor-radius-100);
|
||||||
background-color: rgb(var(--tone-gray-500));
|
|
||||||
}
|
font-weight: inherit;
|
||||||
%main-nav-vertical .nspaces .menu-panel > div {
|
|
||||||
background-color: rgb(var(--tone-gray-050));
|
background-color: rgb(var(--tone-gray-050));
|
||||||
color: rgb(var(--tone-gray-999));
|
color: rgb(var(--tone-gray-999));
|
||||||
padding-left: 36px;
|
|
||||||
}
|
}
|
||||||
%main-nav-vertical .nspaces .menu-panel > div::before {
|
%main-nav-vertical-popover-menu-trigger[aria-expanded='true'] {
|
||||||
@extend %with-info-circle-fill-mask, %as-pseudo;
|
border-bottom-left-radius: var(--decor-radius-000);
|
||||||
color: rgb(var(--tone-blue-500));
|
border-bottom-right-radius: var(--decor-radius-000);
|
||||||
/* sizes the icon not the text */
|
}
|
||||||
font-size: 1.1em;
|
%main-nav-vertical-popover-menu-trigger::after {
|
||||||
|
@extend %with-chevron-down-mask, %as-pseudo;
|
||||||
|
width: 16px;
|
||||||
|
height: 16px;
|
||||||
|
position: relative;
|
||||||
|
}
|
||||||
|
%main-nav-vertical-popover-menu-trigger[aria-expanded='true']::after {
|
||||||
|
@extend %with-chevron-up-mask;
|
||||||
|
}
|
||||||
|
%main-nav-vertical-menu-panel {
|
||||||
|
border-top-left-radius: var(--decor-radius-000);
|
||||||
|
border-top-right-radius: var(--decor-radius-000);
|
||||||
|
border-top: var(--decor-border-000);
|
||||||
}
|
}
|
||||||
|
|
|
@ -0,0 +1,169 @@
|
||||||
|
# MenuPanel
|
||||||
|
|
||||||
|
```hbs preview-template
|
||||||
|
|
||||||
|
{{#each
|
||||||
|
(array 'light' 'dark')
|
||||||
|
as |theme|}}
|
||||||
|
<figure>
|
||||||
|
<figcaption>Without a header</figcaption>
|
||||||
|
<div
|
||||||
|
class={{class-map
|
||||||
|
'menu-panel'
|
||||||
|
(array (concat 'theme-' theme))
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<ul role="menu">
|
||||||
|
<li aria-current="true" role="none">
|
||||||
|
<Action role="menuitem">Item 1<span>Label</span><span>Label 2</span></Action>
|
||||||
|
</li>
|
||||||
|
<li role="separator">
|
||||||
|
Item some title text
|
||||||
|
</li>
|
||||||
|
<li role="none">
|
||||||
|
<Action role="menuitem">Item 2</Action>
|
||||||
|
</li>
|
||||||
|
<li role="separator"></li>
|
||||||
|
<li role="none">
|
||||||
|
<Action role="menuitem">Item 3</Action>
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
</figure>
|
||||||
|
<figure>
|
||||||
|
<figcaption>With a header</figcaption>
|
||||||
|
<div
|
||||||
|
class={{class-map
|
||||||
|
'menu-panel'
|
||||||
|
(array (concat 'theme-' theme))
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<div>
|
||||||
|
<p>Some content explaining what the menu is about</p>
|
||||||
|
</div>
|
||||||
|
<ul role="menu">
|
||||||
|
<li aria-current="true" role="none">
|
||||||
|
<Action role="menuitem">Item 1<span>Label</span><span>Label 2</span></Action>
|
||||||
|
</li>
|
||||||
|
<li role="separator">
|
||||||
|
Item some title text
|
||||||
|
</li>
|
||||||
|
<li role="none">
|
||||||
|
<Action role="menuitem">Item 2</Action>
|
||||||
|
</li>
|
||||||
|
<li role="separator"></li>
|
||||||
|
<li role="none">
|
||||||
|
<Action role="menuitem">Item 3</Action>
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
</figure>
|
||||||
|
|
||||||
|
<figure>
|
||||||
|
<StateChart
|
||||||
|
@src={{state-chart 'boolean'}}
|
||||||
|
as |State Guard StateAction dispatch state|>
|
||||||
|
<Action>Focus Left</Action>
|
||||||
|
<DisclosureMenu as |disclosure|>
|
||||||
|
<disclosure.Action
|
||||||
|
{{on 'click' disclosure.toggle}}
|
||||||
|
>
|
||||||
|
{{if disclosure.expanded 'Close' 'Open'}}
|
||||||
|
</disclosure.Action>
|
||||||
|
<disclosure.Menu
|
||||||
|
style={{style-map
|
||||||
|
(array 'max-height' (if (state-matches state 'true') (add 0 this.rect.height)) 'px')
|
||||||
|
}}
|
||||||
|
class={{class-map
|
||||||
|
(array 'menu-panel')
|
||||||
|
(array 'menu-panel-confirming' (state-matches state 'true'))
|
||||||
|
(array (concat 'theme-' theme))
|
||||||
|
}}
|
||||||
|
|
||||||
|
as |panel|>
|
||||||
|
<div
|
||||||
|
{{on-resize
|
||||||
|
(dom-position (set this 'header'))
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<p>Some text in here</p>
|
||||||
|
</div>
|
||||||
|
<panel.Menu as |menu|>
|
||||||
|
<menu.Item
|
||||||
|
aria-current="true"
|
||||||
|
>
|
||||||
|
<menu.Action>
|
||||||
|
Item 1
|
||||||
|
<span>Label</span>
|
||||||
|
<span>Label 2</span>
|
||||||
|
</menu.Action>
|
||||||
|
</menu.Item>
|
||||||
|
<menu.Separator>
|
||||||
|
Item some title text
|
||||||
|
</menu.Separator>
|
||||||
|
<menu.Item>
|
||||||
|
<menu.Action>
|
||||||
|
Item 2
|
||||||
|
</menu.Action>
|
||||||
|
</menu.Item>
|
||||||
|
<menu.Separator />
|
||||||
|
<menu.Item
|
||||||
|
class="dangerous"
|
||||||
|
>
|
||||||
|
<menu.Action
|
||||||
|
{{on "click" (fn dispatch 'TOGGLE')}}
|
||||||
|
>
|
||||||
|
Item 3
|
||||||
|
</menu.Action>
|
||||||
|
|
||||||
|
<div
|
||||||
|
{{on-resize
|
||||||
|
(dom-position (set this 'rect'))
|
||||||
|
}}
|
||||||
|
style={{style-map
|
||||||
|
(array 'top' (if (state-matches state 'true') (sub 0 this.header.height)) 'px')
|
||||||
|
}}
|
||||||
|
class={{class-map
|
||||||
|
'menu-panel-confirmation'
|
||||||
|
'informed-action'
|
||||||
|
'confirmation-alert'
|
||||||
|
'warning'
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<div>
|
||||||
|
<header>Hi</header>
|
||||||
|
<p>Body</p>
|
||||||
|
</div>
|
||||||
|
<ul>
|
||||||
|
<li>
|
||||||
|
<Action
|
||||||
|
@tabindex="-1"
|
||||||
|
{{on "click" (queue disclosure.close (fn dispatch 'TOGGLE'))}}
|
||||||
|
>
|
||||||
|
Confirm
|
||||||
|
</Action>
|
||||||
|
</li>
|
||||||
|
<li>
|
||||||
|
<Action
|
||||||
|
@tabindex="-1"
|
||||||
|
{{on "click" (fn dispatch 'TOGGLE')}}
|
||||||
|
>
|
||||||
|
Cancel
|
||||||
|
</Action>
|
||||||
|
</li>
|
||||||
|
</ul>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
</menu.Item>
|
||||||
|
</panel.Menu>
|
||||||
|
|
||||||
|
</disclosure.Menu>
|
||||||
|
|
||||||
|
</DisclosureMenu>
|
||||||
|
|
||||||
|
<Action>Focus Right</Action>
|
||||||
|
</StateChart>
|
||||||
|
</figure>
|
||||||
|
|
||||||
|
{{/each}}
|
||||||
|
```
|
|
@ -0,0 +1,58 @@
|
||||||
|
/* old stuff */
|
||||||
|
|
||||||
|
%menu-panel {
|
||||||
|
overflow: hidden;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated {
|
||||||
|
position: absolute;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated [type='checkbox'] {
|
||||||
|
display: none;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated {
|
||||||
|
transition: max-height 150ms;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated {
|
||||||
|
transition: min-height 150ms, max-height 150ms;
|
||||||
|
min-height: 0;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated:not(.confirmation) [type='checkbox'] ~ * {
|
||||||
|
transition: transform 150ms;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated [type='checkbox']:checked ~ * {
|
||||||
|
transform: translateX(calc(-100% - 10px));
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated.confirmation [role='menu'] {
|
||||||
|
min-height: 205px !important;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated [type='checkbox']:checked ~ * {
|
||||||
|
/* this needs to autocalculate */
|
||||||
|
min-height: 143px;
|
||||||
|
max-height: 143px;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated [id$='-']:first-child:checked ~ ul label[for$='-'] * [role='menu'],
|
||||||
|
%menu-panel-deprecated [id$='-']:first-child:checked ~ ul > li > [role='menu'] {
|
||||||
|
display: block;
|
||||||
|
}
|
||||||
|
/**/
|
||||||
|
%menu-panel-deprecated > ul > li > div[role='menu'] {
|
||||||
|
position: absolute;
|
||||||
|
top: 0;
|
||||||
|
left: calc(100% + 10px);
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated > ul > li > *:not(div[role='menu']) {
|
||||||
|
position: relative;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated:not(.left) {
|
||||||
|
right: 0px !important;
|
||||||
|
left: auto !important;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated.left {
|
||||||
|
left: 0px;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated:not(.above) {
|
||||||
|
top: 28px;
|
||||||
|
}
|
||||||
|
%menu-panel-deprecated.above {
|
||||||
|
bottom: 42px;
|
||||||
|
}
|
|
@ -3,11 +3,12 @@
|
||||||
change=(action "change")
|
change=(action "change")
|
||||||
) as |api|}}
|
) as |api|}}
|
||||||
<div
|
<div
|
||||||
class={{join ' ' (compact (array
|
class={{class-map
|
||||||
'menu-panel'
|
(array 'menu-panel')
|
||||||
position
|
(array 'menu-panel-deprecated')
|
||||||
(if isConfirmation 'confirmation')
|
(array position)
|
||||||
))}}
|
(array isConfirmation 'confirmation')
|
||||||
|
}}
|
||||||
{{did-insert (action 'connect')}}
|
{{did-insert (action 'connect')}}
|
||||||
>
|
>
|
||||||
<YieldSlot @name="controls">
|
<YieldSlot @name="controls">
|
||||||
|
|
|
@ -1,22 +1,50 @@
|
||||||
@import './skin';
|
@import './skin';
|
||||||
@import './layout';
|
@import './layout';
|
||||||
|
@import './deprecated';
|
||||||
|
|
||||||
.menu-panel {
|
.menu-panel {
|
||||||
@extend %menu-panel;
|
@extend %menu-panel;
|
||||||
}
|
}
|
||||||
|
.menu-panel-deprecated {
|
||||||
|
@extend %menu-panel-deprecated;
|
||||||
|
}
|
||||||
|
%menu-panel {
|
||||||
|
@extend %panel;
|
||||||
|
}
|
||||||
|
%menu-panel-item span {
|
||||||
|
@extend %menu-panel-badge;
|
||||||
|
}
|
||||||
%menu-panel [role='separator'] {
|
%menu-panel [role='separator'] {
|
||||||
|
@extend %panel-separator;
|
||||||
@extend %menu-panel-separator;
|
@extend %menu-panel-separator;
|
||||||
}
|
}
|
||||||
%menu-panel > div {
|
%menu-panel > div {
|
||||||
@extend %menu-panel-header;
|
@extend %menu-panel-header;
|
||||||
}
|
}
|
||||||
// %menu-panel > ul > li > *:not(div),
|
%menu-panel > ul {
|
||||||
%menu-panel [role='menuitem'] {
|
@extend %menu-panel-body;
|
||||||
|
}
|
||||||
|
%menu-panel-body > li {
|
||||||
|
@extend %menu-panel-item;
|
||||||
|
}
|
||||||
|
%menu-panel-body > [role='treeitem'],
|
||||||
|
%menu-panel-body > li > [role='menuitem'],
|
||||||
|
%menu-panel-body > li > [role='option'] {
|
||||||
|
@extend %menu-panel-button;
|
||||||
|
}
|
||||||
|
%menu-panel-button + * {
|
||||||
|
@extend %menu-panel-confirmation;
|
||||||
|
}
|
||||||
|
%menu-panel-item[aria-selected] > *,
|
||||||
|
%menu-panel-item[aria-checked] > *,
|
||||||
|
%menu-panel-item[aria-current] > *,
|
||||||
|
%menu-panel-item.is-active > * {
|
||||||
|
@extend %menu-panel-button-selected;
|
||||||
|
}
|
||||||
|
%menu-panel-button {
|
||||||
@extend %internal-button;
|
@extend %internal-button;
|
||||||
}
|
}
|
||||||
%menu-panel > ul > li.dangerous > *:not(div) {
|
/* first-child is highly likely to be the button/or anchor*/
|
||||||
|
%menu-panel-item.dangerous > *:first-child {
|
||||||
@extend %internal-button-dangerous;
|
@extend %internal-button-dangerous;
|
||||||
}
|
}
|
||||||
%menu-panel .informed-action {
|
|
||||||
border: 0 !important;
|
|
||||||
}
|
|
||||||
|
|
|
@ -1,115 +1,7 @@
|
||||||
%menu-panel {
|
|
||||||
position: absolute;
|
|
||||||
}
|
|
||||||
%menu-panel [type='checkbox'] {
|
|
||||||
display: none;
|
|
||||||
}
|
|
||||||
%menu-panel {
|
|
||||||
overflow: hidden;
|
|
||||||
transition: min-height 150ms, max-height 150ms;
|
|
||||||
min-height: 0;
|
|
||||||
}
|
|
||||||
%menu-panel:not(.confirmation) [type='checkbox'] ~ * {
|
|
||||||
transition: transform 150ms;
|
|
||||||
}
|
|
||||||
%menu-panel [type='checkbox']:checked ~ * {
|
|
||||||
transform: translateX(calc(-100% - 10px));
|
|
||||||
}
|
|
||||||
%menu-panel.confirmation [role='menu'] {
|
|
||||||
min-height: 200px !important;
|
|
||||||
}
|
|
||||||
%menu-panel [role='menuitem'] {
|
|
||||||
display: flex;
|
|
||||||
justify-content: space-between;
|
|
||||||
}
|
|
||||||
%menu-panel [role='menuitem']:after {
|
|
||||||
@extend %as-pseudo;
|
|
||||||
display: block !important;
|
|
||||||
background-position: center right !important;
|
|
||||||
}
|
|
||||||
%menu-panel-sub-panel {
|
|
||||||
position: absolute;
|
|
||||||
top: 0;
|
|
||||||
left: calc(100% + 10px);
|
|
||||||
display: none;
|
|
||||||
}
|
|
||||||
/* TODO: once everything is using ListCollection */
|
|
||||||
/* this can go */
|
|
||||||
%menu-panel [type='checkbox']:checked ~ * {
|
|
||||||
/* this needs to autocalculate */
|
|
||||||
min-height: 143px;
|
|
||||||
max-height: 143px;
|
|
||||||
}
|
|
||||||
%menu-panel [id$='-']:first-child:checked ~ ul label[for$='-'] * [role='menu'],
|
|
||||||
%menu-panel [id$='-']:first-child:checked ~ ul > li > [role='menu'] {
|
|
||||||
display: block;
|
|
||||||
}
|
|
||||||
/**/
|
|
||||||
%menu-panel > ul > li > div[role='menu'] {
|
|
||||||
@extend %menu-panel-sub-panel;
|
|
||||||
}
|
|
||||||
%menu-panel > ul > li > *:not(div[role='menu']) {
|
|
||||||
position: relative;
|
|
||||||
}
|
|
||||||
%menu-panel:not(.left) {
|
|
||||||
right: 0px;
|
|
||||||
left: auto;
|
|
||||||
}
|
|
||||||
%menu-panel.left {
|
|
||||||
left: 0px;
|
|
||||||
}
|
|
||||||
%menu-panel:not(.above) {
|
|
||||||
top: 28px;
|
|
||||||
}
|
|
||||||
%menu-panel.above {
|
|
||||||
bottom: 42px;
|
|
||||||
}
|
|
||||||
%menu-panel > ul {
|
|
||||||
margin: 0;
|
|
||||||
padding: 4px 0;
|
|
||||||
}
|
|
||||||
%menu-panel > ul,
|
|
||||||
%menu-panel > ul > li,
|
|
||||||
%menu-panel > ul > li > * {
|
|
||||||
width: 100%;
|
|
||||||
}
|
|
||||||
%menu-panel > ul > li > * {
|
|
||||||
text-align: left !important;
|
|
||||||
}
|
|
||||||
%menu-panel-separator {
|
|
||||||
padding-top: 0.35em;
|
|
||||||
}
|
|
||||||
%menu-panel-separator:not(:first-child) {
|
|
||||||
margin-top: 0.35em;
|
|
||||||
}
|
|
||||||
%menu-panel-separator:not(:empty) {
|
|
||||||
padding-left: 1em;
|
|
||||||
padding-right: 1em;
|
|
||||||
padding-bottom: 0.1em;
|
|
||||||
}
|
|
||||||
%menu-panel-header {
|
%menu-panel-header {
|
||||||
padding: 10px;
|
padding: 0.625rem var(--padding-x); /* 10px */
|
||||||
white-space: normal;
|
white-space: normal;
|
||||||
}
|
}
|
||||||
/* here the !important is only needed for what seems to be a difference */
|
|
||||||
/* with the CSS before and after compression */
|
|
||||||
/* i.e. before compression this style is applied */
|
|
||||||
/* after compression it is in the source but doesn't seem to get */
|
|
||||||
/* applied (unless you add the !important) */
|
|
||||||
%menu-panel .is-active {
|
|
||||||
position: relative !important;
|
|
||||||
}
|
|
||||||
%menu-panel .is-active > *::after {
|
|
||||||
position: absolute;
|
|
||||||
top: 50%;
|
|
||||||
margin-top: -8px;
|
|
||||||
right: 10px;
|
|
||||||
}
|
|
||||||
%menu-panel-header::before {
|
|
||||||
position: absolute;
|
|
||||||
left: 15px;
|
|
||||||
top: calc(10px + 0.1em);
|
|
||||||
}
|
|
||||||
%menu-panel-header {
|
%menu-panel-header {
|
||||||
max-width: fit-content;
|
max-width: fit-content;
|
||||||
}
|
}
|
||||||
|
@ -118,3 +10,63 @@
|
||||||
max-width: 200px;
|
max-width: 200px;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
%menu-panel-header::before {
|
||||||
|
position: absolute;
|
||||||
|
left: 15px;
|
||||||
|
top: calc(10px + 0.1em);
|
||||||
|
}
|
||||||
|
|
||||||
|
%menu-panel-body {
|
||||||
|
margin: 0;
|
||||||
|
padding: calc(var(--padding-y) - 0.625rem) 0; /* 10px */
|
||||||
|
}
|
||||||
|
%menu-panel-body,
|
||||||
|
%menu-panel-item,
|
||||||
|
%menu-panel-item > * {
|
||||||
|
width: 100%;
|
||||||
|
}
|
||||||
|
%menu-panel-item,
|
||||||
|
%menu-panel-button {
|
||||||
|
text-align: left;
|
||||||
|
}
|
||||||
|
%menu-panel-badge {
|
||||||
|
padding: 0 8px;
|
||||||
|
margin-left: 0.5rem; /* 8px */
|
||||||
|
}
|
||||||
|
%menu-panel-button {
|
||||||
|
display: flex;
|
||||||
|
}
|
||||||
|
%menu-panel-button::after {
|
||||||
|
margin-left: auto;
|
||||||
|
/* as we are using margin-left for right align */
|
||||||
|
/* we can't use it for an absolute margin-left */
|
||||||
|
/* so cheat with a bit of padding/translate */
|
||||||
|
padding-right: var(--padding-x);
|
||||||
|
transform: translate(calc(var(--padding-x) / 2), 0);
|
||||||
|
}
|
||||||
|
%menu-panel-separator {
|
||||||
|
padding-top: 0.375rem; /* 6px */
|
||||||
|
}
|
||||||
|
%menu-panel-separator:not(:first-child) {
|
||||||
|
margin-top: 0.275rem; /* 6px */
|
||||||
|
}
|
||||||
|
%menu-panel-separator:not(:empty) {
|
||||||
|
padding-left: var(--padding-x);
|
||||||
|
padding-right: var(--padding-x);
|
||||||
|
padding-bottom: 0.125rem; /* 2px */
|
||||||
|
}
|
||||||
|
|
||||||
|
%menu-panel.menu-panel-confirming {
|
||||||
|
overflow: hidden;
|
||||||
|
}
|
||||||
|
%menu-panel-confirmation {
|
||||||
|
position: absolute;
|
||||||
|
top: 0;
|
||||||
|
left: calc(100% + 10px);
|
||||||
|
}
|
||||||
|
%menu-panel-body {
|
||||||
|
transition: transform 150ms;
|
||||||
|
}
|
||||||
|
%menu-panel.menu-panel-confirming > ul {
|
||||||
|
transform: translateX(calc(-100% - 10px));
|
||||||
|
}
|
||||||
|
|
|
@ -1,34 +1,32 @@
|
||||||
%menu-panel {
|
%menu-panel-button-selected::after {
|
||||||
border: var(--decor-border-100);
|
@extend %with-check-plain-mask, %as-pseudo;
|
||||||
border-radius: var(--decor-radius-200);
|
|
||||||
box-shadow: var(--decor-elevation-600);
|
|
||||||
}
|
|
||||||
%menu-panel > ul > li {
|
|
||||||
list-style-type: none;
|
|
||||||
}
|
}
|
||||||
%menu-panel-header {
|
%menu-panel-header {
|
||||||
@extend %p2;
|
@extend %p2;
|
||||||
}
|
}
|
||||||
|
%menu-panel-header + ul {
|
||||||
|
border-top: var(--decor-border-100);
|
||||||
|
border-color: rgb(var(--tone-border, var(--tone-gray-300)));
|
||||||
|
}
|
||||||
|
/* if the first item is a separator and it */
|
||||||
|
/* contains text don't add a line */
|
||||||
|
%menu-panel-separator:first-child:not(:empty) {
|
||||||
|
border: none;
|
||||||
|
}
|
||||||
%menu-panel-separator {
|
%menu-panel-separator {
|
||||||
@extend %p3;
|
@extend %p3;
|
||||||
text-transform: uppercase;
|
text-transform: uppercase;
|
||||||
font-weight: var(--typo-weight-medium);
|
font-weight: var(--typo-weight-medium);
|
||||||
}
|
|
||||||
%menu-panel-header + ul,
|
|
||||||
%menu-panel-separator:not(:first-child) {
|
|
||||||
border-top: var(--decor-border-100);
|
|
||||||
}
|
|
||||||
%menu-panel .is-active > *::after {
|
|
||||||
@extend %with-check-plain-mask, %as-pseudo;
|
|
||||||
}
|
|
||||||
%menu-panel {
|
|
||||||
border-color: rgb(var(--tone-gray-300));
|
|
||||||
background-color: rgb(var(--tone-gray-000));
|
|
||||||
}
|
|
||||||
%menu-panel-separator {
|
|
||||||
color: rgb(var(--tone-gray-600));
|
color: rgb(var(--tone-gray-600));
|
||||||
}
|
}
|
||||||
%menu-panel-header + ul,
|
%menu-panel-item {
|
||||||
%menu-panel-separator:not(:first-child) {
|
list-style-type: none;
|
||||||
border-color: rgb(var(--tone-gray-300));
|
}
|
||||||
|
%menu-panel-badge {
|
||||||
|
@extend %pill;
|
||||||
|
color: rgb(var(--tone-gray-000));
|
||||||
|
background-color: rgb(var(--tone-gray-500));
|
||||||
|
}
|
||||||
|
%menu-panel-body .informed-action {
|
||||||
|
border: 0 !important;
|
||||||
}
|
}
|
||||||
|
|
|
@ -1,6 +1,9 @@
|
||||||
<Action
|
<Action
|
||||||
role="menuitem"
|
role="menuitem"
|
||||||
...attributes
|
...attributes
|
||||||
|
@href={{@href}}
|
||||||
|
{{on 'click' (if @href @disclosure.close (noop))}}
|
||||||
|
@external={{@external}}
|
||||||
>
|
>
|
||||||
{{yield}}
|
{{yield}}
|
||||||
</Action>
|
</Action>
|
||||||
|
|
|
@ -9,7 +9,7 @@
|
||||||
}}
|
}}
|
||||||
>
|
>
|
||||||
{{yield (hash
|
{{yield (hash
|
||||||
Action=(component 'menu/action')
|
Action=(component 'menu/action' disclosure=@disclosure)
|
||||||
Item=(component 'menu/item')
|
Item=(component 'menu/item')
|
||||||
Separator=(component 'menu/separator')
|
Separator=(component 'menu/separator')
|
||||||
)}}
|
)}}
|
||||||
|
|
|
@ -1,6 +1,4 @@
|
||||||
<li
|
<li
|
||||||
role="separator"
|
role="separator"
|
||||||
...attributes
|
...attributes
|
||||||
>
|
>{{yield}}</li>
|
||||||
{{yield}}
|
|
||||||
</li>
|
|
||||||
|
|
|
@ -0,0 +1,126 @@
|
||||||
|
---
|
||||||
|
type: css
|
||||||
|
---
|
||||||
|
# Panel
|
||||||
|
|
||||||
|
A very basic, card-like 'panel' CSS component, currently used for menu-panels.
|
||||||
|
|
||||||
|
When building components using `panel`, please make use of the CSS custom
|
||||||
|
properties available to help maintain consistency within the panel.
|
||||||
|
|
||||||
|
**Very important**: Please avoid using style attributes for doing this; the example
|
||||||
|
below is only for illustrative purposes. Please use this CSS component as a
|
||||||
|
building block for other CSS instead.
|
||||||
|
|
||||||
|
|
||||||
|
```hbs preview-template
|
||||||
|
<figure>
|
||||||
|
<figcaption>Panel with no padding (in dark mode)</figcaption>
|
||||||
|
<div
|
||||||
|
class={{class-map
|
||||||
|
"panel"
|
||||||
|
"theme-dark"
|
||||||
|
}}
|
||||||
|
...attributes
|
||||||
|
>
|
||||||
|
<div>
|
||||||
|
<p>Some text purposefully with no padding</p>
|
||||||
|
</div>
|
||||||
|
<hr />
|
||||||
|
<div>
|
||||||
|
<p>Along with a separator ^ again purposefully with no padding</p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</figure>
|
||||||
|
<figure>
|
||||||
|
<figcaption>Panel using inherited padding for consistency</figcaption>
|
||||||
|
<div
|
||||||
|
class={{class-map
|
||||||
|
"panel"
|
||||||
|
}}
|
||||||
|
...attributes
|
||||||
|
>
|
||||||
|
<Action
|
||||||
|
style={{style-map
|
||||||
|
(array 'width' '100%')
|
||||||
|
(array 'border-bottom' '1px solid rgb(var(--tone-border))')
|
||||||
|
(array 'padding' 'var(--padding-x) var(--padding-y)')
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
Full Width Button
|
||||||
|
</Action>
|
||||||
|
<div
|
||||||
|
style={{style-map
|
||||||
|
(array 'padding' 'var(--padding-x) var(--padding-y)')
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<p>Some text with padding</p>
|
||||||
|
</div>
|
||||||
|
<hr />
|
||||||
|
<div
|
||||||
|
style={{style-map
|
||||||
|
(array 'padding' 'var(--padding-x) var(--padding-y)')
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<p>Along with a separator ^ again with padding</p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</figure>
|
||||||
|
<figure>
|
||||||
|
<figcaption>Panel using larger padding and different color borders</figcaption>
|
||||||
|
<div
|
||||||
|
class={{class-map
|
||||||
|
"panel"
|
||||||
|
}}
|
||||||
|
style={{style-map
|
||||||
|
(array '--padding-x' '24px')
|
||||||
|
(array '--padding-y' '24px')
|
||||||
|
(array '--tone-border' 'var(--tone-strawberry-500)')
|
||||||
|
}}
|
||||||
|
...attributes
|
||||||
|
>
|
||||||
|
<Action
|
||||||
|
style={{style-map
|
||||||
|
(array 'width' '100%')
|
||||||
|
(array 'border-bottom' '1px solid rgb(var(--tone-border))')
|
||||||
|
(array 'padding' 'var(--padding-x) var(--padding-y)')
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
Full Width Button
|
||||||
|
</Action>
|
||||||
|
<div
|
||||||
|
style={{style-map
|
||||||
|
(array 'padding' 'var(--padding-x) var(--padding-y)')
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<p>Some text with padding</p>
|
||||||
|
</div>
|
||||||
|
<hr />
|
||||||
|
<div
|
||||||
|
style={{style-map
|
||||||
|
(array 'padding' 'var(--padding-x) var(--padding-y)')
|
||||||
|
}}
|
||||||
|
>
|
||||||
|
<p>Along with a separator ^ again with padding</p>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</figure>
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
```css
|
||||||
|
.panel {
|
||||||
|
@extend %panel;
|
||||||
|
}
|
||||||
|
.panel hr {
|
||||||
|
@extend %panel-separator;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## CSS Properties
|
||||||
|
|
||||||
|
| Property | Type | Default | Description |
|
||||||
|
| --- | --- | --- | --- |
|
||||||
|
| `--tone-border` | `color` | --tone-gray-300 | Default color for all borders |
|
||||||
|
| `--padding-x` | `length` | 14px | Default x padding to be used for padding values within the component |
|
||||||
|
| `--padding-y` | `length` | 14px | Default y padding to be used for padding values within the component |
|
|
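As a sketch of the 'building block' advice above — `.my-card` is a hypothetical selector, while `%panel`, `%panel-separator`, and the custom properties are the ones documented here — a consuming component's CSS might look roughly like:

```css
/* hypothetical consumer built on top of the %panel placeholder */
.my-card {
  @extend %panel;
  /* tune the panel via its custom properties instead of style attributes */
  --padding-x: 24px;
  --padding-y: 24px;
  --tone-border: var(--tone-strawberry-500);
}
.my-card hr {
  @extend %panel-separator;
}
.my-card > div {
  /* reuse the shared padding values for internal spacing */
  padding: var(--padding-y) var(--padding-x);
}
```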
@ -0,0 +1,8 @@
|
||||||
|
#docfy-demo-preview-panel {
|
||||||
|
.panel {
|
||||||
|
@extend %panel;
|
||||||
|
}
|
||||||
|
.panel hr {
|
||||||
|
@extend %panel-separator;
|
||||||
|
}
|
||||||
|
}
|
|
@ -0,0 +1,2 @@
|
||||||
|
@import './skin';
|
||||||
|
@import './layout';
|
|
@ -0,0 +1,11 @@
|
||||||
|
%panel {
|
||||||
|
--padding-x: 14px;
|
||||||
|
--padding-y: 14px;
|
||||||
|
/* max-height: var(--panel-height, auto); */
|
||||||
|
}
|
||||||
|
%panel {
|
||||||
|
position: relative;
|
||||||
|
}
|
||||||
|
%panel-separator {
|
||||||
|
margin: 0;
|
||||||
|
}
|
|
@ -0,0 +1,17 @@
|
||||||
|
%panel {
|
||||||
|
--tone-border: var(--tone-gray-300);
|
||||||
|
border: var(--decor-border-100);
|
||||||
|
border-radius: var(--decor-radius-200);
|
||||||
|
box-shadow: var(--decor-elevation-600);
|
||||||
|
}
|
||||||
|
%panel-separator {
|
||||||
|
border-top: var(--decor-border-100);
|
||||||
|
}
|
||||||
|
%panel {
|
||||||
|
color: rgb(var(--tone-gray-900));
|
||||||
|
background-color: rgb(var(--tone-gray-000));
|
||||||
|
}
|
||||||
|
%panel,
|
||||||
|
%panel-separator {
|
||||||
|
border-color: rgb(var(--tone-border));
|
||||||
|
}
|