Commit Graph

422 Commits

Author SHA1 Message Date
Pierre Souchay 6f91085869 Use lower case for serviceName computation of cache keys 2021-02-09 19:19:40 +01:00
R.B. Boyer 03790a1f91
server: add OSS stubs supporting validation of source namespaces in service-intentions config entries (#9527) 2021-01-25 11:27:38 -06:00
Daniel Nephin 29ce5ec575 structs: fix caching of ServiceSpecificRequest when ingress=true
The field was not being included in the cache info key. This would result in a DNS request for
web.service.consul returning the same result as web.ingress.consul, when those results should
not be the same.
2021-01-14 17:01:40 -05:00
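
A minimal Go sketch of the idea behind the fix above, using hypothetical type and field names rather than Consul's real ServiceSpecificRequest: every field that changes the meaning of the request, including the ingress flag, has to be folded into the cache key.

    package main

    import "fmt"

    // serviceRequest stands in for the real request type; the names here are
    // illustrative, not Consul's actual struct.
    type serviceRequest struct {
        Datacenter  string
        ServiceName string
        Ingress     bool
    }

    // cacheKey folds every semantically relevant field into the key. Omitting
    // Ingress is what made web.service.consul and web.ingress.consul share a
    // cache entry.
    func (r serviceRequest) cacheKey() string {
        return fmt.Sprintf("%s/%s/ingress=%t", r.Datacenter, r.ServiceName, r.Ingress)
    }

    func main() {
        a := serviceRequest{Datacenter: "dc1", ServiceName: "web", Ingress: false}
        b := serviceRequest{Datacenter: "dc1", ServiceName: "web", Ingress: true}
        fmt.Println(a.cacheKey() == b.cacheKey()) // false: the two lookups no longer collide
    }
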
kevinkengne 2e7e78999d
add completeness test for types with CacheInfo method (#9480)
include all fields when fuzzing in tests
split tests by struct type

Ensure the new value for the field is different

fuzzer.Fuzz could produce the same value again in some cases.

Use a custom fuzz function for QueryOptions. That type is an embedded struct in the request types
but only one of the fields is important to include in the cache key.

Move enterpriseMetaField to an oss file so that we can change it in enterprise.
2021-01-12 19:45:46 -05:00
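
A sketch of the completeness-test idea from the commit above, using reflection over a hypothetical request struct instead of the real fuzzer-driven tests: mutate one field at a time and require the cache key to change with it.

    package main

    import (
        "fmt"
        "reflect"
    )

    // request and cacheKey are illustrative stand-ins for the real structs
    // with a CacheInfo method.
    type request struct {
        Datacenter  string
        ServiceName string
        Ingress     bool
    }

    func (r request) cacheKey() string {
        // Deliberately incomplete: Ingress is left out, so the check below reports it.
        return fmt.Sprintf("%s/%s", r.Datacenter, r.ServiceName)
    }

    func main() {
        base := request{}
        v := reflect.ValueOf(base)
        for i := 0; i < v.NumField(); i++ {
            modified := reflect.New(v.Type()).Elem()
            field := modified.Field(i)
            // Give the field a value guaranteed to differ from the zero value.
            switch field.Kind() {
            case reflect.String:
                field.SetString("changed")
            case reflect.Bool:
                field.SetBool(true)
            }
            if base.cacheKey() == modified.Interface().(request).cacheKey() {
                fmt.Printf("field %s is not included in the cache key\n", v.Type().Field(i).Name)
            }
        }
    }
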
Daniel Nephin 2187808803 structs: add tests for String() methods
To show that printing one of these IDs works properly now that the String() method
receiver is no longer a pointer.
2021-01-07 18:47:38 -05:00
Daniel Nephin d113f0e690 structs: Fix printing of IDs
These types are used as values (not pointers) in other structs. Using a pointer receiver causes
problems when the value is printed. fmt will not call the String method if it is passed a value
and the String method has a pointer receiver. By using a value receiver the correct string is printed.

Also remove some unused methods.
2021-01-07 18:47:38 -05:00
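
A self-contained illustration of the fmt behavior the two commits above rely on (the types are made up for the example): with a pointer receiver, a plain value does not satisfy fmt.Stringer, so fmt falls back to default struct formatting.

    package main

    import "fmt"

    type ptrRecvID struct{ Name string }

    // Pointer receiver: only *ptrRecvID implements fmt.Stringer.
    func (i *ptrRecvID) String() string { return "id:" + i.Name }

    type valRecvID struct{ Name string }

    // Value receiver: both valRecvID and *valRecvID implement fmt.Stringer.
    func (i valRecvID) String() string { return "id:" + i.Name }

    func main() {
        fmt.Println(ptrRecvID{Name: "web"}) // {web}   - String() is not called
        fmt.Println(valRecvID{Name: "web"}) // id:web  - String() is called
    }
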
Matt Keeler 85e5da53d5
Special case the error returned when we have a Raft leader but are not tracking it in the ServerLookup (#9487)
This can happen when another node in the cluster, such as a client, is unable to communicate with the leader server and sees it as failed. When that happens, its failing status eventually gets propagated to the other servers in the cluster, which can eventually result in RPCs returning a “No cluster leader” error.

That error is misleading and unhelpful for determining the root cause of the issue, as it is not Raft instability but rather a client -> server networking issue. Therefore this commit adds a new error that is returned in that case to differentiate between the two.
2021-01-04 14:05:23 -05:00
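
A sketch of the distinction described above; all identifiers and messages here are hypothetical rather than Consul's actual error values. The point is simply to return a different error when Raft reports a leader that the server's own membership tracking does not know about.

    package main

    import (
        "errors"
        "fmt"
    )

    var (
        errNoLeader         = errors.New("No cluster leader")
        errLeaderNotTracked = errors.New("cluster has a Raft leader but it is not tracked in the server lookup")
    )

    // leaderError distinguishes genuine Raft instability (no leader at all)
    // from a client -> server networking problem (a leader exists but this
    // server is not tracking it).
    func leaderError(raftLeaderAddr string, trackedInServerLookup bool) error {
        if raftLeaderAddr == "" {
            return errNoLeader
        }
        if !trackedInServerLookup {
            return errLeaderNotTracked
        }
        return nil
    }

    func main() {
        fmt.Println(leaderError("", false))              // No cluster leader
        fmt.Println(leaderError("10.0.0.1:8300", false)) // leader exists but is untracked
        fmt.Println(leaderError("10.0.0.1:8300", true))  // <nil>
    }
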
R.B. Boyer d921690bfe
acl: global tokens created by auth methods now correctly replicate to secondary datacenters (#9351)
Previously the tokens would fail to insert into the secondary's state
store because the AuthMethod field of the ACLToken did not point to a
known auth method from the primary.
2020-12-09 15:22:29 -06:00
Kyle Havlovitz c4eff420be
Merge pull request #9009 from hashicorp/update-secondary-ca
connect: Fix an issue with updating CA config in a secondary datacenter
2020-11-30 14:49:28 -08:00
Kyle Havlovitz 0bfda4481f Add CA server delegate interface for testing 2020-11-19 20:08:06 -08:00
Kyle Havlovitz 9be7c6401c connect: update some function comments in CA manager 2020-11-17 16:00:19 -08:00
Matt Keeler 66fd23d67f
Refactor to call non-voting servers read replicas (#9191)
Co-authored-by: Kit Patella <kit@jepsen.io>
2020-11-17 10:53:57 -05:00
R.B. Boyer c7233ba871
server: remove config entry CAS in legacy intention API bridge code (#9151)
Change line-item intention edits made via the API to be handled by the state store instead of by CAS operations.

Fixes #9143
2020-11-13 14:42:21 -06:00
R.B. Boyer c003871c54
server: break up Intention.Apply monolithic method (#9007)
The Intention.Apply RPC is quite large, so this PR attempts to break it down into smaller functions and dissolves the pre-config-entry approach to the breakdown, as it only confused things.
2020-11-13 09:15:39 -06:00
Matt Keeler c048e86bb2
Switch to using the external autopilot module 2020-11-09 09:22:11 -05:00
Daniel Nephin b532e092dc structs: add a namespace test for CheckServiceNode.CanRead 2020-10-30 15:07:04 -04:00
Daniel Nephin c42fe5ae43 subscribe: set the request namespace 2020-10-30 14:34:04 -04:00
Daniel Nephin a5dd2001cf stream: remove Event.Key
Makes Payload a type with FilterByKey so that Payloads can implement
filtering by key. With this approach we don't need to expose a Namespace
field on Event, and we don't need to invent micro formats or require a
bunch of code to be aware of exactly how the key field is encoded.
2020-10-28 16:48:04 -04:00
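
A rough Go sketch of the shape described in the commit above (the signature is approximate, not necessarily the exact stream package API): key filtering becomes a method on the payload instead of a Key field on Event.

    package stream

    // Payload is implemented by every event payload type, which lets each
    // payload decide what it means to match a subscription key without the
    // Event carrying a Key (or Namespace) field of its own.
    type Payload interface {
        // FilterByKey reports whether the payload matches the given key and
        // namespace; empty strings are treated as wildcards.
        FilterByKey(key, namespace string) bool
    }

    // Event no longer needs a Key field; consumers filter via the payload.
    type Event struct {
        Topic   string
        Index   uint64
        Payload Payload
    }
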
R.B. Boyer 58387fef0a
server: config entry replication now correctly uses namespaces in comparisons (#9024)
Previously config entries sharing a kind & name but in different
namespaces could occasionally cause "stuck states" in replication
because the namespace fields were ignored during the differential
comparison phase.

Example:

Two config entries written to the primary:

    kind=A,name=web,namespace=bar
    kind=A,name=web,namespace=foo

Under the covers these both get saved to memdb, so they are sorted by
all 3 components (kind,name,namespace) during natural iteration. This
means that before the replication code does its own incomplete sort,
the underlying data IS sorted by namespace ascending (bar comes before
foo).

After one pass of replication the primary and secondary datacenters have
the same set of config entries present. If
"kind=A,name=web,namespace=bar" were to be deleted, then things get
weird. Before replication the two sides look like:

primary: [
    kind=A,name=web,namespace=foo
]
secondary: [
    kind=A,name=web,namespace=bar
    kind=A,name=web,namespace=foo
]

The differential comparison phase walks these two lists in sorted order
and first compares "kind=A,name=web,namespace=foo" vs
"kind=A,name=web,namespace=bar" and falsely determines they are the SAME
and thus causes an update of "kind=A,name=web,namespace=foo". Then it
compares "<nothing>" with "kind=A,name=web,namespace=foo" and falsely
determines that the latter should be DELETED.

During reconciliation the deletes are processed before updates, and so
for a brief moment in the secondary "kind=A,name=web,namespace=foo" is
erroneously deleted and then immediately restored.

Unfortunately after this replication phase the final state is identical
to the initial state, so when it loops around again (rate limited) it
repeats the same set of operations indefinitely.
2020-10-23 13:41:54 -05:00
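
A small Go sketch of the corrected ordering (the types are hypothetical): the differential comparison has to order entries by all three identity components, so entries differing only by namespace are never treated as the same entry.

    package main

    import (
        "fmt"
        "strings"
    )

    type entryID struct{ Kind, Name, Namespace string }

    // compareID orders config entries by kind, then name, then namespace, the
    // same ordering memdb uses, so the two sorted lists line up correctly.
    func compareID(a, b entryID) int {
        if c := strings.Compare(a.Kind, b.Kind); c != 0 {
            return c
        }
        if c := strings.Compare(a.Name, b.Name); c != 0 {
            return c
        }
        return strings.Compare(a.Namespace, b.Namespace)
    }

    func main() {
        foo := entryID{Kind: "A", Name: "web", Namespace: "foo"}
        bar := entryID{Kind: "A", Name: "web", Namespace: "bar"}
        // Non-zero: these are different entries, not an update of one another.
        fmt.Println(compareID(foo, bar)) // 1
    }
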
Freddy 9c04cbc40f
Add HasExact to topology endpoint (#9010) 2020-10-23 10:45:41 -06:00
Daniel Nephin 8b601fdcac
Merge pull request #8771 from amenzhinsky/fix-grpc-use-tls-mapping
Fix GRPCUseTLS flag HTTP API mapping
2020-10-21 18:37:11 -04:00
s-christoff 9bb348c6c7
Enhance the output of consul snapshot inspect (#8787) 2020-10-09 14:57:29 -05:00
Freddy 13df5d5bf8
Add protocol to the topology endpoint response (#8868) 2020-10-08 17:31:54 -06:00
Daniel Nephin da6400192b
Merge pull request #8768 from hashicorp/streaming/add-subscribe-service
subscribe: add subscribe service for streaming change events
2020-10-07 21:24:03 -04:00
Freddy da91e999f6
Return intention info in svc topology endpoint (#8853) 2020-10-07 18:35:34 -06:00
Daniel Nephin 21c21191f4 structs: add CheckServiceNode.CanRead
And use it from the subscribe endpoint.
2020-10-07 18:15:13 -04:00
R.B. Boyer 1b413b0444
connect: support defining intentions using layer 7 criteria (#8839)
Extend Consul’s intentions model to allow for request-based access control enforcement for HTTP-like protocols in addition to the existing connection-based enforcement for unspecified protocols (e.g. tcp).
2020-10-06 17:09:13 -05:00
R.B. Boyer a2a8e9c783
connect: intentions are now managed as a new config entry kind "service-intentions" (#8834)
- Upgrade the ConfigEntry.ListAll RPC to be kind-aware so that older
copies of Consul will not see new config entries they don't understand
replicate down.

- Add shim conversion code so that the old API/CLI method of interacting
with intentions will continue to work so long as none of these are
edited via config entry endpoints. Almost all of the read-only APIs will
continue to function indefinitely.

- Add new APIs that operate on individual intentions without IDs so that
the UI doesn't need to implement CAS operations.

- Add a new serf feature flag indicating support for
intentions-as-config-entries.

- The old line-item intentions way of interacting with the state store
will transparently flip between the legacy memdb table and the config
entry representations so that readers will never see a hiccup during
migration where the results are incomplete. It uses a piece of system
metadata to control the flip.

- The primary datacenter will begin migrating intentions into config
entries on startup once all servers in the datacenter are on a version
of Consul with the intentions-as-config-entries feature flag. When it is
complete the old state store representations will be cleared. We also
record a piece of system metadata indicating this has occurred. We use
this metadata to skip ALL of this code the next time the leader starts
up.

- The secondary datacenters continue to run the old intentions
replicator until all servers in the secondary DC and primary DC support
intentions-as-config-entries (via serf flag). Once this condition is met,
the old intentions replicator ceases.

- The secondary datacenters replicate the new config entries as they are
migrated in the primary. When they detect that the primary has zeroed
its old state store table, they wait until all config entries up to that
point are replicated and then zero their own copy of the old state store
table. We also record a piece of system metadata indicating this has
occurred. We use this metadata to skip ALL of this code the next time
the leader starts up.
2020-10-06 13:24:05 -05:00
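
A condensed sketch of the leader-side gating described above; every name here is hypothetical and the real migration involves more steps. The serf feature flag gates when migration may start, and a system metadata marker lets later leader startups skip it entirely.

    package main

    import "fmt"

    const intentionsMigratedKey = "intentions-migrated" // hypothetical metadata key

    type cluster struct {
        systemMetadata map[string]string
        serverFlags    []bool // whether each server advertises the serf feature flag
    }

    func (c *cluster) maybeMigrateIntentions() error {
        if c.systemMetadata[intentionsMigratedKey] == "true" {
            return nil // already migrated: skip all of this on later leader startups
        }
        for _, ok := range c.serverFlags {
            if !ok {
                return nil // wait until every server supports intentions-as-config-entries
            }
        }
        // ... convert legacy intentions into service-intentions config entries,
        // then clear the old state store representation ...
        c.systemMetadata[intentionsMigratedKey] = "true"
        return nil
    }

    func main() {
        c := &cluster{systemMetadata: map[string]string{}, serverFlags: []bool{true, true, true}}
        fmt.Println(c.maybeMigrateIntentions(), c.systemMetadata)
    }
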
R.B. Boyer 4998a08c56
server: create new memdb table for storing system metadata (#8703)
This adds a tiny new memdb table and a corresponding raft operation
for updating a small, effectively map[string]string collection of
"system metadata", which can persistently record facts about the Consul
state machine itself.

The first use of this feature will come in a later PR.
2020-10-06 10:08:37 -05:00
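
A sketch of the shape this collection takes (field names are illustrative, not necessarily the real struct): each record is just a key and a value, updated through its own raft operation so the fact survives restarts and snapshots.

    package main

    import "fmt"

    // SystemMetadataEntry is an illustrative stand-in for the stored record.
    type SystemMetadataEntry struct {
        Key   string
        Value string
    }

    func main() {
        // Conceptually the table behaves like a persistent map[string]string.
        entries := map[string]SystemMetadataEntry{}
        entries["intentions-migrated"] = SystemMetadataEntry{Key: "intentions-migrated", Value: "true"}
        fmt.Println(entries["intentions-migrated"].Value) // "true"
    }
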
freddygv 413a894a1a Do not evaluate discovery chain for topology upstreams 2020-10-05 10:24:50 -06:00
freddygv cf7b7fcdd6 Single DB txn for ServiceTopology and other PR comments 2020-10-05 10:24:50 -06:00
freddygv f906b94351 Add func to combine up+downstream queries 2020-10-05 10:24:50 -06:00
freddygv b012d8374e support querying upstreams/downstreams from registrations 2020-10-05 10:24:50 -06:00
Aliaksandr Mianzhynski 74cfba7065 Fix GRPCUseTLS flag HTTP API mapping 2020-09-29 18:29:56 +03:00
freddygv 9fa1b13df9 Resolve conflicts 2020-09-29 08:59:18 -06:00
R.B. Boyer 0fb088aac3
agent: make the json/hcl decoding of ConnectProxyConfig fully work with CamelCase and snake_case (#8741)
Fixes #7418
2020-09-24 13:58:52 -05:00
Paul Banks 7d58901ae8
Fix bad int -> string conversions caught by go vet changes in 1.15 (#8739) 2020-09-24 11:14:07 +01:00
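
For context, the class of conversion go vet began flagging in Go 1.15 (an illustrative snippet, not the actual diff): string(n) on an integer produces the UTF-8 encoding of code point n, not its decimal representation.

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        n := 65
        fmt.Println(string(rune(n))) // "A"  - rune conversion, rarely what was intended
        fmt.Println(strconv.Itoa(n)) // "65" - the decimal string the code usually wants
    }
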
Kyle Havlovitz 35bb09f85c
Merge pull request #8646 from hashicorp/common-intermediate-ttl
Move IntermediateCertTTL to common CA config
2020-09-15 12:03:29 -07:00
freddygv 7fd518ff1d Merge master 2020-09-14 16:17:43 -06:00
freddygv 7b9d1b41d5 Resolve conflicts against master 2020-09-11 18:41:58 -06:00
freddygv 768dbaa68d Add session flag to cookie config 2020-09-11 18:34:03 -06:00
freddygv eab90ea9fa Revert EnvoyConfig nesting 2020-09-11 09:21:43 -06:00
Kyle Havlovitz 2f7210bde2 Move IntermediateCertTTL to common CA config 2020-09-10 00:23:22 -07:00
freddygv cd4cf5161f Update resolver defaulting 2020-09-03 13:08:44 -06:00
freddygv eaa250cc80 Ensure resolver node with LB isn't considered default 2020-09-03 08:55:57 -06:00
freddygv ef877449ce Move valid policies to pkg level 2020-09-02 15:49:03 -06:00
freddygv f81fe6a1a1 Remove LB infix and move injection to xds 2020-09-02 15:13:50 -06:00
R.B. Boyer 119e945c3e
connect: all config entries pick up a meta field (#8596)
Fixes #8595
2020-09-02 14:10:25 -05:00
R.B. Boyer d0f74cd1e8
connect: fix bug in preventing some namespaced config entry modifications (#8601)
Whenever an upsert/deletion of a config entry happens, within the open
state store transaction we speculatively test-compile all discovery
chains that may be affected by the pending modification, to verify that
the write would not create an erroneous scenario (such as splitting
traffic to a subset that does not exist).

If a single discovery chain evaluation references two config entries
with the same kind and name in different namespaces, then sometimes the
upsert/deletion would be falsely rejected. It does not appear that this
bug would have let invalid writes through to the state store, so the
correction does not require a cleanup phase.
2020-09-02 10:47:19 -05:00
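
A sketch of the validate-on-write pattern from the commit above, with all helper names invented for the example: test-compile every affected chain against the proposed entry and reject the write if any compilation fails.

    package main

    import (
        "errors"
        "fmt"
    )

    type configEntry struct{ Kind, Name, Namespace string }

    // compileChain stands in for the real discovery-chain compiler; it fails
    // here only for a deliberately broken entry so the control flow is visible.
    func compileChain(chain string, proposed configEntry) error {
        if proposed.Name == "broken" {
            return errors.New("splits traffic to a subset that does not exist")
        }
        return nil
    }

    // validateProposed mirrors the idea of speculatively compiling, inside the
    // state store transaction, every chain the pending change could affect.
    func validateProposed(proposed configEntry, affectedChains []string) error {
        for _, chain := range affectedChains {
            if err := compileChain(chain, proposed); err != nil {
                return fmt.Errorf("write would invalidate discovery chain %q: %w", chain, err)
            }
        }
        return nil
    }

    func main() {
        bad := configEntry{Kind: "service-splitter", Name: "broken", Namespace: "default"}
        fmt.Println(validateProposed(bad, []string{"web"}))
    }
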
freddygv 63f79e5f9b Restructure structs and other PR comments 2020-09-02 09:10:50 -06:00