* Add Partition to all our models
* Add partitions into our serializers/fingerprinting
* Make some amendments to a few adapters ready for partitions
* Amend blueprints to avoid linting error
* Update all our repositories to include partitions; also remove the
enabled/disabled nspace repos and just use a single nspace repo with
conditionals
* Ensure nspace and partition parameters always return '' no matter what
* Ensure data-sink finds the model properly
This will later be replaced by a @dataSink decorator but we are fine
kicking that can down the road a little more
* Add all the new partition data layer
* Add a way to set the title of the page from inside the route
and make it accessible via a route announcer
* Make the Consul Route the default/basic one
* Tweak nspace and partition abilities not to check the length
* Thread partition through all the components that need it
* Some ACL tweaks
* Move the entire app to use partitions
* Delete all the tests we no longer need
* Update some Unit tests to use partition
* Fix up KV title tests
* Fix up a few more acceptance tests
* Fixup and temporarily ignore some acceptance tests
* Stop using ember-cli-page-object's fillable as it doesn't seem to work
* Fix lint error
* Remove old ACL related test
* Add a tick after filling out forms
* Fix token warning modal
* Found some more places where we need a partition var
* Fixup some more acceptance tests
* Tokens still needs a repo service for CRUD
* Remove acceptance tests we no longer need
* Fixup and "FIXME ignore" a few tests
* Remove an s
* Disable blocking queries for KV to revert to previous release for now
* Fixup adapter tests to follow async/function resolving interface
* Fixup all the serializer integration tests
* Fixup service/repo integration tests
* Fixup deleting acceptance test
* Fixup some ent tests
* Make sure nspaces passes the dc through for when that's important
* ...aaaand acceptance nspaces with the extra dc param
This commit fixes 2 problems with our OIDC flow in the UI; the first is straightforward, the second is relatively more in depth:
1: A typo (1.10.1 only)
During #10503 we injected our settings service into our oidc-provider service; there are some comments in the PR as to the whys and wherefores of this change (https://github.com/hashicorp/consul/pull/10503/files#diff-aa2ffda6d0a966ba631c079fa3a5f60a2a1bdc7eed5b3a98ee7b5b682f1cb4c3R28).
Fixing the typo so it no longer looks for an unknown service (repository/settings > settings) resolved this.
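The fix itself amounts to correcting the name used in the injection; a minimal sketch, assuming Ember's standard service decorator (the class shown here is illustrative, not the exact Consul UI source):

```javascript
import Service, { inject as service } from '@ember/service';

export default class OIDCProviderService extends Service {
  // before (the typo): @service('repository/settings') settings;
  // this looked up a service name that isn't registered, so injection failed
  // after: inject the actual settings service
  @service('settings') settings;
}
```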
2: URL encoding (1.9.x, 1.10.x)
TL;DR: /oidc/authorize/provider/with/slashes/code/with/slashes/status/with/slashes should be /oidc/authorize/provider%2Fwith%2Fslashes/code%2Fwith%2Fslashes/status%2Fwith%2Fslashes
When we receive our authorization response back from the OIDC 3rd party, we POST the code and status data from that response back to Consul via a callback as part of the OIDC flow. From what I remember back when this feature was originally added, the method is a POST request to avoid folks putting secret-like things into API requests/URLs/query params that are more likely to be visible to the human eye, and POSTing is expected behaviour.
Additionally, in the UI we identify all external resources using unique resource identifiers. Our OIDC flow uses these resources and their identifiers, driven by a declarative state machine. If any information in these identifiers uses non-URL-safe characters then those characters require URL encoding, and a while back we added a helper specifically to do this once we started using these identifiers for things that required URL encoding.
The final fix here makes sure that we URL encode code and status before using them with one of our unique resource identifiers, just like we do in the majority of other places where we use these identifiers.
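As a rough illustration of the fix (using the native `encodeURIComponent` here rather than the UI's own helper, so the exact call differs from the real code), encoding each value before composing the identifier stops slashes from being read as extra path segments:

```javascript
// illustrative only: the real code uses the UI's URL-encoding helper when
// building its unique resource identifiers
const provider = 'provider/with/slashes';
const code = 'code/with/slashes';
const status = 'status/with/slashes';

const id = [
  '/oidc/authorize',
  encodeURIComponent(provider),
  encodeURIComponent(code),
  encodeURIComponent(status),
].join('/');
// => '/oidc/authorize/provider%2Fwith%2Fslashes/code%2Fwith%2Fslashes/status%2Fwith%2Fslashes'
```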
The default namespace, and the token's default namespace (or its origin namespace), are slightly more complicated than other things we deal with in the UI; there's plenty of info/docs on this that I've added in this PR.
Previously:
When a namespace was not specified in the URL, we used to default to the default namespace. When you logged in using a token we automatically forwarded you to the URL for the namespace your token originates from, so you were then using your token's namespace by default. You could of course then edit the URL to remove the namespace portion, or perhaps revisit the UI at the root path with your token already set. In these latter cases we would show you information from the default namespace. So if you had no namespace segment/portion in the URL, we would assume default, perform actions against the default namespace, and highlight the default namespace in the namespace selector menu. If you wanted to perform actions in your token's origin namespace you would have to manually select it from the namespace selector menu.
This PR:
Now, when you have no namespace segment/portion in the URL, we use the token's origin namespace instead (and if you don't have a token, we then use the default namespace as before).
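In rough terms the fallback order now looks like this (a sketch only; the function name and the token's `Namespace` field are assumptions rather than the actual implementation):

```javascript
// sketch of the fallback described above, not the actual Consul UI code
function resolveNamespace(urlNspace, token) {
  if (urlNspace) {
    // an explicit namespace segment in the URL always wins
    return urlNspace;
  }
  if (token && token.Namespace) {
    // no segment in the URL: fall back to the token's origin namespace
    return token.Namespace;
  }
  // no token at all: keep the previous behaviour and use the default namespace
  return 'default';
}
```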
Notes/thoughts:
I originally thought we were showing an incorrectly selected namespace in the namespace selector, but it also matched up with what we were doing with the API, so it was in fact correct. The issue was more that we weren't selecting the origin namespace of the token for the user when a namespace segment was omitted from the URL. Seeing as we automatically forward you to the token's origin namespace when you log in, and we were correctly showing the namespace we were acting on when you had no namespace segment in the URL (in the previous case, default), I'm not entirely sure how much of an issue this actually was.
This characteristic of namespace+token+namespace is a little weird and it's easy to miss a subtlety or two, so I tried to add some documentation in here for future me/someone else (including some in-depth code comments around one of the API endpoints where this is very subtle and very easy to miss). I'm not the greatest at words, so it would be great to get some edits there if it doesn't seem clear to folks.
The fact that we used to save your previous datacenter and namespace into local storage for reasons also meant the interaction here was slightly more complicated than it needed to be, so whilst we were here we rejigged things slightly to still satisfy said reasons but without using local storage (we try and grab the info from higher up). A lot of the related code here is from before we had our Routlets, which I think could probably make all of this a lot less complicated, but I didn't want to do a wholesale replacement in this PR; we can save that for a separate PR on its own at some point.
We use a `<DataSource @src={{url}} />` component throughout our UI for when we want to load data from within our components. The URL specified as the `@src` is used to map/look up what is used to retrieve the data; for example we mostly use our repository methods wrapped with our Promise-backed `EventSource` implementation, but DataSource URLs can also be mapped to EventTarget-backed `EventSource`s and native `EventSource`s or `WebSocket`s if we ever need to use those (for example these are options for potential streaming support with the Consul backend).
Previous to this PR, the URL-to-function/method mapping used a very naive, humongous `switch` statement, which was a temporary 'this is fine for the moment' solution, although we'd always wanted to replace it with something more manageable.
Here we add `wayfarer` as a dependency - a very small (1kb), very fast, radix-trie-based router - and use that to perform the URL-to-function/method mapping.
This essentially turns every `DataSource` into a very small SPA - change its URL and the view of data changes. When the data itself changes, either the yielded view of data changes or the `onchange` event is fired with the changed data, making the externally sourced view of data completely reactive.
```javascript
// use the new decorator in a service somewhere to annotate/decorate
// a method with the URL that can be used to access this method
@dataSource('/:ns/:dc/services')
async findAllByDatacenter(params) {
  // get the data
}

// then use it with JS in a route somewhere
async model() {
  return this.data.source(uri => uri`/${nspace}/${dc}/services`);
}
```
```hbs
{{!-- or just straight in a template using the component --}}
<DataSource @src="/default/dc1/services" @onchange="" />
```
This also uses a new `container` Service to automatically import certain services yet not execute/instantiate them. This new service also provides a lookup that supports both standard Ember DI lookup and Class-based lookup for these specific services. Lastly we also provide another debug function called DataSourceRoutes(), which can be called from the console and gives you a list of URLs and their mappings.
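To give a feel for the kind of mapping `wayfarer` performs (a sketch under assumptions; in the real wiring the routes come from the URLs declared via `@dataSource`, and the handlers are the decorated repository methods rather than inline functions):

```javascript
import wayfarer from 'wayfarer';

// an illustrative handler standing in for a decorated repository method
const findAllByDatacenter = (params) => `services for ${params.ns}/${params.dc}`;

// register the URL pattern against the function that retrieves the data
const router = wayfarer();
router.on('/:ns/:dc/services', findAllByDatacenter);

// later, a DataSource @src is matched to the right method with its params
router('/default/dc1/services'); // calls findAllByDatacenter({ ns: 'default', dc: 'dc1' })
```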
* ui: Apply native class codemod to all services
* ui: Apply native class codemod to routes
* ui: Apply native class codemod to controllers
* Fix up ember proxy `content` issue
* Add a CreateTime on policy creation
* Minor formatting
* Convert child based saving to use ec instead of custom approach
* Remove custom event source repo wrapping initializer
* Repos here are no longer proxy objects revert to using them normally
* Remove areas of code that were used to set up source backed repos
* ui: Add the most basic workspace root in /ui
* We already have a LICENSE file in the repository root
* Change directory path in build scripts ui-v2 -> ui
* Make yarn install flags configurable from elsewhere
* Minimal workspace root makefile
* Call the new docker specific target
* Update yarn in the docker build image
* Reconfigure the netlify target and move to the higher makefile
* Move ui-v2 -> ui/packages/consul-ui
* Change repo root to reflect new folder structure
* Temporarily don't hoist consul-api-double
* Fixup CI configuration
* Fixup lint errors
* Fixup Netlify target