The fields in the certs are meant to hold the original binary
representation of this data, not an ASCII-encoded version.
The only time we should be colon-hex-encoding fields is for display
purposes or marshaling through non-TLS mediums (like RPC).
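To make the distinction concrete, here's a minimal sketch (the helper and its name are mine, for illustration only): the struct field keeps the raw bytes, and colon-hex encoding happens only at display time.

```go
package main

import (
	"fmt"
	"strings"
)

// EncodeSerialAsHex renders raw bytes as colon-separated hex, e.g.
// "de:ad:be:ef". This is for display only; the cert fields themselves
// keep the raw binary form.
func EncodeSerialAsHex(raw []byte) string {
	parts := make([]string, len(raw))
	for i, b := range raw {
		parts[i] = fmt.Sprintf("%02x", b)
	}
	return strings.Join(parts, ":")
}

func main() {
	raw := []byte{0xde, 0xad, 0xbe, 0xef} // what the cert field should hold
	fmt.Println(EncodeSerialAsHex(raw))   // colon-hex only at display time
}
```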
This only affects Vault versions >= 1.1.1 because the prior code
accidentally relied upon a bug that was fixed in
https://github.com/hashicorp/vault/pull/6505
The existing tests should have caught this, but they were using a
vendored copy of Vault 0.10.3. This fixes the tests by running
an actual Vault binary instead of an in-process copy, which has the
added benefit of narrowing the dependency on vault to just vault/api.
Also update VaultProvider to use SetIntermediate validation code
similar to the ConsulProvider implementation.
* Add build system support for protobuf generation
This is done generically so that we don't have to keep updating the Makefile for each new proto file.
Note: any file outside the vendor directory with a .proto extension will be run through protoc if the corresponding namespace.pb.go file is not up to date.
If you want to rebuild just a single proto file you can do so with: `make proto-rebuild PROTOFILES=<list of proto files to rebuild>`
Providing the PROTOFILES var will override the default behavior of finding all the .proto files.
* Start adding types to the agent/proto package
These will be needed for some other work and are by no means comprehensive.
* Add ability to resolve/fixup the agentpb.ACLLinks structure in the state store.
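As a rough sketch of what resolving/fixing up links means here (types and field names are illustrative, not the actual agentpb definitions): a link carries an ID and a Name, and the fixup step fills in whatever is missing from the state store.

```go
// Illustrative only; not the real agentpb.ACLLink definition.
package aclsketch

type ACLLink struct {
	ID   string
	Name string
}

// resolveACLLinks fills in missing names from an ID -> name index,
// mirroring the kind of fixup the state store performs on links.
func resolveACLLinks(links []ACLLink, nameByID map[string]string) {
	for i := range links {
		if links[i].Name == "" {
			links[i].Name = nameByID[links[i].ID]
		}
	}
}
```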
* Use protobuf marshalling of Raft requests instead of msgpack for protoc-generated types.
This does not change any encoding of existing types.
* Removed the structs package's automatic protobuf encoding
Instead, any caller of raftApply that wants to opt in to protobuf encoding has to call `raftApplyProtobuf` (see the sketch below).
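A standalone sketch of the two encoding paths (names are placeholders; the real helpers hang off Server and feed the bytes into the Raft log; the leading message-type byte mirrors how Consul tags log entries):

```go
package raftsketch

import (
	"bytes"

	"github.com/golang/protobuf/proto"
	"github.com/hashicorp/go-msgpack/codec"
)

// encodeMsgpack is the existing path used by raftApply.
func encodeMsgpack(t byte, msg interface{}) ([]byte, error) {
	var buf bytes.Buffer
	buf.WriteByte(t)
	err := codec.NewEncoder(&buf, &codec.MsgpackHandle{}).Encode(msg)
	return buf.Bytes(), err
}

// encodeProtobuf is the opt-in path behind raftApplyProtobuf,
// usable only for protoc-generated types.
func encodeProtobuf(t byte, msg proto.Message) ([]byte, error) {
	data, err := proto.Marshal(msg)
	if err != nil {
		return nil, err
	}
	return append([]byte{t}, data...), nil
}
```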
* Run update-vendor to fixup modules.txt
Nothing changed as far as dependencies go, but the ordering of modules in that file depends on when they are first seen, and it's not alphabetical.
* Rename some things and implement the structs.RPCInfo interface bits
agentpb.QueryOptions and agentpb.WriteRequest implement 3 of the 4 RPCInfo funcs and the new TargetDatacenter message type implements the fourth.
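For context, a sketch of what's being satisfied; treat the exact RPCInfo method set below as my assumption for illustration:

```go
package rpcinfosketch

// Assumed shape of structs.RPCInfo at the time.
type RPCInfo interface {
	RequestDatacenter() string
	IsRead() bool
	AllowStaleRead() bool
	TokenSecret() string
}

// Stand-ins for the generated agentpb types.
type QueryOptions struct{ Token string }

func (q QueryOptions) IsRead() bool         { return true }
func (q QueryOptions) AllowStaleRead() bool { return false }
func (q QueryOptions) TokenSecret() string  { return q.Token }

type TargetDatacenter struct{ Datacenter string }

func (t TargetDatacenter) RequestDatacenter() string { return t.Datacenter }

// Embedding both yields a complete RPCInfo implementation.
type ExampleRequest struct {
	QueryOptions
	TargetDatacenter
}

var _ RPCInfo = ExampleRequest{}
```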
* Use the right encoding function.
* Renamed agent/proto package to agent/agentpb to prevent package name conflicts
* Update modules.txt to fix ordering
* Change blockingQuery to take in interfaces for the query options and meta
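Illustrative-only interface shapes for what blockingQuery can now accept, so both the hand-written structs types and the generated agentpb types fit; the real names and methods in Consul may differ:

```go
package blockingsketch

import "time"

// What blockingQuery reads from the request.
type queryOptions interface {
	GetMinQueryIndex() uint64
	GetMaxQueryTime() time.Duration
}

// What blockingQuery writes back into the response metadata.
type queryMeta interface {
	SetIndex(uint64)
	SetLastContact(time.Duration)
	SetKnownLeader(bool)
}
```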
* Add %T to error output.
* Add/Update some comments
All these changes should have no side-effects or change behavior:
- Use bytes.Buffer's String() instead of a conversion
- Use time.Since and time.Until where fitting
- Drop unnecessary returns and assignments
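For the record, before/after forms of those mechanical cleanups:

```go
package main

import (
	"bytes"
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	elapsedBefore := time.Now().Sub(start) // before
	elapsedAfter := time.Since(start)      // after

	deadline := start.Add(5 * time.Second)
	remainingBefore := deadline.Sub(time.Now()) // before
	remainingAfter := time.Until(deadline)      // after

	var buf bytes.Buffer
	buf.WriteString("hello")
	sBefore := string(buf.Bytes()) // before: conversion
	sAfter := buf.String()         // after: built-in method

	fmt.Println(elapsedBefore, elapsedAfter, remainingBefore, remainingAfter, sBefore, sAfter)
}
```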
* Add State storage and LastResult argument into Cache so that cache.Types can safely store additional data that is eventually expired.
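A sketch of the shape this takes (field names approximate the cache package's types and shouldn't be read as the exact API):

```go
package cachesketch

import "time"

// FetchResult is what a cache type returns from a fetch.
type FetchResult struct {
	Value interface{} // the result delivered to callers
	State interface{} // opaque per-type state stored alongside the entry
	Index uint64      // blocking-query index
}

// FetchOptions carries the previous result so a cache type can carry
// its State forward across fetches until the entry eventually expires.
type FetchOptions struct {
	MinIndex   uint64
	Timeout    time.Duration
	LastResult *FetchResult
}
```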
* New Leaf cache type working and basic tests passing. TODO: more extensive testing for the root change jitter across blocking requests; test that concurrent fetches for different leaves interact nicely with the rootsWatcher.
* Add multi-client and delayed rotation tests.
* Typos and cleanup error handling in roots watch
* Add comment about how the FetchResult can be used and change ca leaf state to use a non-pointer state.
* Plumb test override of root CA jitter through TestAgent so that tests are deterministic again!
* Fix failing config test
This adds the `agent/connect/ca/plugin` library for consuming/serving Connect CA providers as [go-plugin](https://github.com/hashicorp/go-plugin) plugins. This **does not** wire this up in any way to Consul itself, so this will not enable using these plugins yet.
## Why?
We want to enable CA providers to be pluggable without modifying Consul so that any CA or PKI system can potentially back the Connect certificates. This CA system may also be used in the future for easier bootstrapping and internal cluster security.
### go-plugin
The benefit of `go-plugin` is that for the plugin consumer, the fact that the interface implementation is communicating over multi-process RPC is invisible. Internals of Consul will continue to just use `ca.Provider` interface implementations as if they're local. For plugin _authors_, they simply have to implement the interface. The network/transport/process management issues are handled by go-plugin itself.
The CA provider plugins support both `net/rpc` and gRPC transports, which enables easy authoring in any language. go-plugin itself handles the actual protocol handshake and connection; this is just a feature of go-plugin.
`go-plugin` has already been in production use for years by Packer, Terraform, Nomad, Vault, and Sentinel, where we've shown stability for both desktop and server-side software. It is very mature.
## Implementation Details
### `map[string]interface{}`
The `Configure` method passes a `map[string]interface{}`. This map contains only Go primitives and containers of primitives (no funcs, chans, etc.). For `net/rpc` we encode as-is using Gob. For gRPC we marshal to JSON and transmit as a `bytes` type. This is the same approach we take with Vault and other software.
Note that this is just the transport encoding; the consuming software sees the map fully decoded.
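A minimal illustration of the gRPC path described above, using plain encoding/json:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Primitive-only provider config, as passed to Configure.
	config := map[string]interface{}{
		"LeafCertTTL":  "72h",
		"SkipValidate": false,
	}

	// Marshal to JSON and ship it in a protobuf `bytes` field.
	raw, err := json.Marshal(config)
	if err != nil {
		panic(err)
	}

	// The receiving side decodes back into a map before calling the
	// provider, so the provider never sees the transport encoding.
	var decoded map[string]interface{}
	if err := json.Unmarshal(raw, &decoded); err != nil {
		panic(err)
	}
	fmt.Println(decoded["LeafCertTTL"]) // 72h
}
```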
### `x509.Certificate` and `CertificateRequest`
We transmit the raw ASN.1 (DER) bytes and decode on the other side. Unit tests verify we get the same certs/CSRs across the wire.
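Concretely, the wire handling amounts to shipping `Raw` and re-parsing; this part uses only the standard library:

```go
package certsketch

import "crypto/x509"

// The bytes put on the wire are the raw ASN.1 DER encoding, with no
// PEM or hex re-encoding in between.
func certBytes(cert *x509.Certificate) []byte { return cert.Raw }

func parseCert(raw []byte) (*x509.Certificate, error) {
	return x509.ParseCertificate(raw)
}

func csrBytes(csr *x509.CertificateRequest) []byte { return csr.Raw }

func parseCSR(raw []byte) (*x509.CertificateRequest, error) {
	return x509.ParseCertificateRequest(raw)
}
```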
### Testing
`go-plugin` exposes test helpers that enable testing the full plugin RPC over real loopback network connections. We test all endpoints for success and error for both `net/rpc` and gRPC.
### Vendoring
This PR doesn't introduce vendoring for two reasons:
1. @banks's `f-envoy` branch introduces a lot of these and I didn't want conflicts.
2. The library isn't actually used yet so it doesn't introduce compile-time errors (it does introduce test errors).
## Next Steps
With this in place, we need to figure out the proper way to actually hook these up to Consul, load them, etc. That discussion can happen elsewhere, since regardless of the approach this plugin library implementation stays exactly the same.
* Add -enable-local-script-checks options
These options allow finer control over when script checks are enabled by
giving the option to allow them only when they are declared in the local
file system.
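For illustration, the flag form is `consul agent -enable-local-script-checks`; in a config file I'd expect the equivalent key to be the snake_case form below (treat the key name as an assumption; the docs added in this change are authoritative):

```json
{
  "enable_local_script_checks": true
}
```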
* Add documentation for the new option
* Nitpick doc wording
* Vendor updates for gRPC and xDS server
* xDS server implementation for serving Envoy as a Connect proxy
* Address initial review comments
* Consistent Envoy package aliases; typos fixed; override TLS and authz for custom listeners
* Moar Typos
* Moar typos
* Fix CA pruning when CA config uses string durations.
The tl;dr here is:
- Configuring LeafCertTTL with a string like "72h" is how we do it by default and should be supported
- Most of our tests managed to escape this by defining them as time.Duration directly
- Our actual default value is a string
- Since this is stored in a map[string]interface{} config, when it is written to Raft it goes through a msgpack encode/decode cycle (even though it's written from the server, not over RPC).
- msgpack decode leaves the string as a `[]uint8`
- Some of our parsers required a string and failed
- So after 1 hour, a default-configured server would throw an error about pruning old CAs
- If a new CA was configured that set LeafCertTTL as a time.Duration, things might be OK after that, but if a new CA was just configured from a config file, initialization would hit the same issue and keep failing, so the old CA would never be pruned.
- Mostly this is just a janky error that slipped past tests due to many levels of complicated encoding/decoding.
tl;dr of the tl;dr: Yay for type safety. map[string]interface{} combined with msgpack always goes wrong, but we somehow get bitten every time in a new way :D
We already fixed this once! The main CA config had the same problem, so @kyhavlov already wrote the mapstructure DecodeHook that fixes it. It wasn't used in several places it needed to be, and one of those is now in `structs`, which caused a dependency cycle, so I've moved them.
This adds a whole new test that explicitly exercises the case that broke here. It also adds tests that would have failed in other places before (the Consul and Vault provider parsing functions). I'm not sure if they would ever be affected as things stand, since we've not seen them break, but it seems better to explicitly test and support this so we don't get bitten a third time!
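To make the fix concrete, a minimal sketch of that kind of hook, assuming mitchellh/mapstructure; the real hook in `structs` may differ in detail:

```go
package main

import (
	"fmt"
	"reflect"
	"time"

	"github.com/mitchellh/mapstructure"
)

// durationHook turns strings *and* []uint8 (what a msgpack round-trip
// hands back for strings) into time.Duration.
func durationHook() mapstructure.DecodeHookFunc {
	return func(f reflect.Type, t reflect.Type, data interface{}) (interface{}, error) {
		if t != reflect.TypeOf(time.Duration(0)) {
			return data, nil
		}
		switch v := data.(type) {
		case string:
			return time.ParseDuration(v)
		case []uint8:
			return time.ParseDuration(string(v))
		default:
			return data, nil
		}
	}
}

func main() {
	var out struct{ LeafCertTTL time.Duration }
	dec, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		DecodeHook: durationHook(),
		Result:     &out,
	})
	if err != nil {
		panic(err)
	}
	// []uint8 is exactly what broke the parsers before this fix.
	if err := dec.Decode(map[string]interface{}{"LeafCertTTL": []uint8("72h")}); err != nil {
		panic(err)
	}
	fmt.Println(out.LeafCertTTL) // 72h0m0s
}
```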
* Typo fix
* Fix bad Uint8 usage
* Revert "BUGFIX: Unit test relying on WaitForLeader() did not work due to wrong test (#4472)"
This reverts commit cec5d72396.
* Revert "CA initialization while boostrapping and TestLeader_ChangeServerID fix. (#4493)"
This reverts commit 589b589b53.
These were only added because SPIFFE intends to use them in the future, but it currently does not mandate their usage due to patchy support in common TLS implementations and some ambiguity over how to use them with URI SAN certificates. We included them because until now everything seemed fine with it; however, we've found the latest version of `openssl` (1.1.0h) fails to validate our certificates if they are enabled. LibreSSL as installed on OS X by default doesn't have these issues. For now it's most compatible not to have them, and later we can find ways to add constraints with wider compatibility testing.