Commit Graph

220 Commits

Author SHA1 Message Date
Dan Funk 0ba2819867 Merge branch 'dev' into feature/approval_request_script 2020-05-29 04:51:50 -04:00
Dan Funk 11413838a7 Faster lookup fields. We were parsing the spec each time to get details about how to search. We're just grabbing the workflow id and task id now and building that straight into the full text search index for faster lookups. Should be peppy.
Another speed improvement - data in the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need to use this model frequently.
2020-05-29 01:39:39 -04:00
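
A minimal sketch of the deferred-column technique described above, assuming SQLAlchemy's declarative mapping; the model and column names are illustrative, not the project's exact code.

    from sqlalchemy import Column, Integer, LargeBinary
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import deferred

    Base = declarative_base()

    class FileDataModel(Base):
        __tablename__ = 'file_data'
        id = Column(Integer, primary_key=True)
        # The binary payload is only loaded from the database when .data is
        # actually accessed, not when the row is first queried.
        data = deferred(Column(LargeBinary))
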
Dan Funk dba41f4759 Ludicrously stupid launch into a refactor of the way all files work in the system, at a time when I crave sleep and peace above all other things.
Added a File class that we wrap around the FileModel so the api endpoints don't change, but File no longer holds references to versions or dates of the file_data model; we
figure this out based on a clean database structure.

The ApprovalFile is directly related to the file_data_model - so there is no chance that a reviewer would review the incorrect version of a file.

Noticed that our FileType enum called "bpmn" "bpmm"; hope this doesn't screw someone up.

Workflows are directly related to the data models that make up the workflow spec they need, so the files should always be there.  There are no more hashes, and thus no more hash errors where the system can't find the files to rebuild the workflow.

Not much to report here, other than I broke every single test in the system at one point.  So I'm super concerned about this, and will be testing it a lot before creating the pull request.
2020-05-28 20:03:50 -04:00
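
A hedged sketch of the wrapper idea in the entry above: the API keeps serializing a File object with the same fields, but that object is now built from the models at read time rather than holding version or date state itself. The names and fields here are illustrative.

    class File(object):
        """Thin view object the API serializes; it carries no versioning state."""
        def __init__(self, file_id, name, content_type, latest_version):
            self.id = file_id
            self.name = name
            self.content_type = content_type
            self.latest_version = latest_version

        @classmethod
        def from_models(cls, file_model, file_data_model):
            # Versions and dates come from the related data model at read time,
            # not from columns cached on the file record itself.
            return cls(file_model.id, file_model.name,
                       file_model.content_type, file_data_model.version)
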
Carlos Lopez 41b26e28bd Merge branch 'dev' into feature/approval_request_script 2020-05-28 08:11:31 -06:00
Dan Funk cd7f67ab48 A major refactor of how we search and store files, as there were a lot of confusing bits in here.
From an API point of view you can do the following (and only the following):

/files?workflow_spec_id=x
* You can find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id
/files?workflow_id=x
* You can find all files associated with a workflow_id, and add a file that is directly associated with the workflow
/files?workflow_id=x&form_field_key=y
* You can find all files associated with a form element on a running workflow, and add a new file.
   Note: you can add multiple files to the same form_field_key, IF they have different file names. If the same name, the original file is archived,
   and the new file takes its place.

The study endpoints always return a list of the file metadata associated with the study.  Removed /studies-files, but there is an
endpoint called

/studies/all  - that returns all the studies in the system, and does include their files.

On a deeper level:
 The File model no longer contains:
  - study_id,
  - task_id,
  - form_field_key

Instead, if the file is associated with a workflow, then that is the one way it is connected to the study, and we use this relationship to find files for a study.
A file is never associated with a task_id, as these change when the workflow is reloaded.
The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
2020-05-28 08:27:26 -04:00
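
A hedged usage sketch of the three query styles listed above, using the requests library; the host, port, and identifiers are placeholders.

    import requests

    base = "http://localhost:5000/v1.0"

    # All files attached to a workflow specification:
    requests.get(base + "/files", params={"workflow_spec_id": "my_spec"})

    # All files attached directly to a running workflow:
    requests.get(base + "/files", params={"workflow_id": 42})

    # Files uploaded through a specific form field on that workflow
    # (the form_field_key doubles as the irb_doc_code):
    requests.get(base + "/files", params={"workflow_id": 42, "form_field_key": "Some_Doc_Code"})
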
Dan Funk 560263d1a3 Missed another test. 2020-05-27 14:45:00 -04:00
Dan Funk 77f72e408f Lookup Service now raises exact matches to the top. Very hackish, but it works. 2020-05-27 14:36:10 -04:00
Carlos Lopez 54640988a7 Update endpoint fixes 2020-05-27 12:06:32 -06:00
Dan Funk 229b5d5ece Forgot to add the test fixes. 2020-05-27 09:55:46 -04:00
Dan Funk d5e075db82 Order search results by relevancy in the lookup service. 2020-05-27 09:47:44 -04:00
Dan Funk 7869fa596e Protocol Builder isn't disabled on the dcos servers, trying to figure out why, and assure it isn't some sort of weird race condition. 2020-05-26 22:42:49 -04:00
Dan Funk eb15d172c1 I absolutely must fix our cascade deletes, but I sort of like the horror such chunks of code cause. We really need to protect these endpoints, assure they never get called under normal circumstances, and raise some thoughtful errors. 2020-05-26 21:18:09 -04:00
Dan Funk ccbf374b40 Loads of bug fixes.
Modified the request_approval to take a list of arguments, which works better for us... today.
UpdateStudy correctly handles validation.
WorkflowService correctly populates random values from lookup tables.
And several fixes down in Spiffworkflow, including a big bug where only the last item in a decision table made it through.
2020-05-26 20:06:50 -04:00
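
Illustrative only: the shape of a script whose do_task accepts a variable-length argument list, the change described above for request_approval; the method signature and names are assumptions rather than the project's exact interface.

    class RequestApproval(object):
        def do_task(self, task, *args):
            # args is the list of approver ids handed over from the BPMN
            # script task, handled in the order they were given.
            for approver_uid in args:
                print("requesting approval from", approver_uid)

    RequestApproval().do_task(None, "approver_one", "approver_two")
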
Dan Funk e9645fa3fd Renamed redirect_url to just redirect. 2020-05-26 15:47:41 -04:00
Carlos Lopez 72b59deeaf Completing tests 2020-05-26 10:21:36 -06:00
Dan Funk a14168362a Merge branch 'feature/support_ui_dashboard' into dev 2020-05-25 21:31:16 -04:00
Carlos Lopez 727274ae33 Using full approval payload to update record 2020-05-25 15:40:24 -06:00
Dan Funk be057e8758 Adding an "UpdateStudy" task that is able to update the data on the study model, useful for setting core data points on the model, such as setting the Primary Investigator, or altering the Study Title.
Fixing a bug where the validation of forms did not correctly process auto-complete fields.
Fixing a bug where the approvals script and the update study script could not process dot notation correctly.

Moved populate_random_data into the WorkflowService where it makes more sense.
2020-05-25 15:30:06 -04:00
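
A hedged sketch of resolving dot notation (e.g. "pi.display_name") against task data, the kind of lookup the UpdateStudy and approvals scripts perform per the entry above; the helper and data here are illustrative, not the project's code.

    def resolve(task_data, dotted_name):
        # Walk the nested dictionaries one dotted segment at a time.
        value = task_data
        for part in dotted_name.split("."):
            value = value[part]
        return value

    data = {"pi": {"display_name": "Dan Funk"}}
    assert resolve(data, "pi.display_name") == "Dan Funk"
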
Dan Funk 6cd4ef64d1 Fixing add_study api endpoint, so you can actually add a new "Study" with just some basic information.
Using the LDAP service for checking user details in development mode - even if you are using the back door.
Added a new Flask function load-example-rrt-data that loads the rrt workflow, and not the CRC workflows.
Modified the "load-example-data" in the tests to use some test data, rather than loading up all the workflows
in CRC each time, with a parameter to load crc data if that is required - which is enabled for just a handful of tests.
(Tests run in 1/4 the time now)
2020-05-25 12:29:05 -04:00
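
A hedged sketch of how a Flask CLI command like load-example-rrt-data is typically registered; the loader body is a stand-in.

    from flask import Flask

    app = Flask(__name__)

    @app.cli.command("load-example-rrt-data")
    def load_example_rrt_data():
        """Load only the rrt workflow, not the full set of CRC workflows."""
        print("loading rrt example data...")
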
Dan Funk 971d9a58e9 As we now have an approval_service.py, I moved all the business logic into this service and out of the request_approval.py script, and moved all tests for these features into a test file for the service. This will make it easier to cross-reference what is happening, as everything happens in one file.
As many of the scripts need to know the workflow, and it was buried in a weird parameter, this is now passed in each time.
2020-05-24 16:13:15 -04:00
Carlos Lopez e9bd19b112 Fixing broken test 2020-05-24 01:22:14 -06:00
Carlos Lopez 49eb4b3f98 Making working endpoints for approvals 2020-05-23 23:53:48 -06:00
Dan Funk d39ef658a2 Made some modifications to the Approval so that it knows exactly what versions of every file are being sent for approval
Added the following columns:
  * date_created - so we know when the file was created
  * renamed workflow_version to just "version", because everything has a version; this is the version of the request.
  * workflow_hash - this is just a quick way to see what files and versions are associated with the request, it could be factored out.
  * study - a quick relationship link to the study, so that this model is easier to use.
  * workflow - ditto
  * approval_files - this is a list from a new link table that links an approval to specific files and versions.

The RequestApproval is logically sound, but still needs some additional pieces in place to be callable from a BPMN workflow diagram.

Altered the file service to pick up on changes to files vs adding new files, so that versions are picked up correctly as
users modify their submission - adding new files or replacing existing ones.  Deleting files worries me, and I will need to revisit this.

The damn base test keeps giving me a headache, so I made changes there to see if clearing and dropping the database each time will allow the tests to pass more consistently.

Lots more tests around the file service to make sure it is versioning user uploaded files correctly.

The "Test Request Approval Script" tries to find to assure the correct behavior as this is likely to be called many times repeatedly and with little knowledge of the internal system.  So it should just "do the right thing".
2020-05-23 15:08:17 -04:00
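
A hedged sketch of the approval model shape implied by the columns listed above; the column types, table names, and link-table fields are assumptions, not the project's exact definitions.

    from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, func
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class ApprovalModel(Base):
        __tablename__ = 'approval'
        id = Column(Integer, primary_key=True)
        study_id = Column(Integer, ForeignKey('study.id'))        # quick link to the study
        workflow_id = Column(Integer, ForeignKey('workflow.id'))  # ditto for the workflow
        version = Column(Integer)        # version of the request, not of the workflow
        workflow_hash = Column(String)   # quick fingerprint of the files/versions under review
        date_created = Column(DateTime, default=func.now())
        approval_files = relationship('ApprovalFileModel')

    class ApprovalFileModel(Base):
        # Link table tying an approval to the exact file versions reviewed.
        __tablename__ = 'approval_file'
        id = Column(Integer, primary_key=True)
        approval_id = Column(Integer, ForeignKey('approval.id'))
        file_data_id = Column(Integer, ForeignKey('file_data.id'))
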
Aaron Louie 6c14248ef9 Adds 'v1.0/' to login route 2020-05-23 14:49:02 -04:00
Dan Funk 49c322177b
Merge pull request #76 from sartography/feature/rrp-endpoints
Feature/rrp endpoints
2020-05-22 19:38:26 -04:00
Dan Funk 148e86bb42 Building out the boilerplate code to make pushing forward on this a little friendlier.
There is an approval api file, and approval model file and an approval test file.
2020-05-22 18:25:00 -04:00
Dan Funk d1606ffb1a forgot to include the new empty master workflow, which allows the tests to all pass. 2020-05-22 15:31:38 -04:00
Dan Funk 1017bb1897 On the tools/render_docx api, allow sending the json data in the body, rather than as a ludicrously long get parameter. Silly Dan. 2020-05-22 15:30:22 -04:00
Dan Funk 503c1c8f18 Allow disabling the Protocol Builder
PB_ENABLED can be set to false in the configuration (either in a file called instance/config.py, or as an environment variable)

Added a check in the base_test, to assure that we are always running tests with the test configuration, and bail out otherwise.  Setting TESTING=true as an environment variable will get this, but so will the correct ordering of imports. Just be dead certain the first file every test file imports is base_test.py.

Aaron was right, and we call the Protocol Builder in all kinds of awful places.  But we don't do this now.  So Carlos, you should have the ability to reuse a lot of the logic in the study_service now.

I dropped the poorly named "study-update" endpoint completely.  We weren't using it. POST and PUT to Study still work just fine for doing exactly that.

All the tests now run and pass with the Protocol builder disabled. Tests that specifically check PB behavior turn it back on for the test, or mock it out.
2020-05-22 14:37:49 -04:00
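
Per the entry above, the Protocol Builder can be switched off through configuration; a minimal example, assuming the flag is read like any other config value.

    # In instance/config.py (or supplied through the environment):
    PB_ENABLED = False

    # or, before starting the app or the tests (exact handling may vary):
    #   export PB_ENABLED=false
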
Dan Funk b490005af7 dropping the remaining config stuff for flask_sso.
updating the user 'sso' endpoint to provide additional information for debugging.
Pulling information from ldap to stay super consistent on where we get our information.
2020-05-22 09:50:18 -04:00
Dan Funk 951710d762 ldap lookup.
Refactored calls into a new lookup_service to keep things tidy.

New keys for all enum/auto-complete fields:
    PROP_OPTIONS_FILE = "spreadsheet.name"
    PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
    PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
    PROP_LDAP_LOOKUP = "ldap.lookup"
    FIELD_TYPE_AUTO_COMPLETE = "autocomplete"
2020-05-19 16:11:43 -04:00
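
A hedged sketch of how a form field's properties might carry these keys and how a lookup could branch on them; the spreadsheet name and column values are made up for illustration.

    field_properties = {
        "spreadsheet.name": "sponsors.xlsx",          # PROP_OPTIONS_FILE
        "spreadsheet.value.column": "sponsor_id",     # PROP_OPTIONS_VALUE_COLUMN
        "spreadsheet.label.column": "sponsor_name",   # PROP_OPTIONS_LABEL_COL
    }

    if "ldap.lookup" in field_properties:             # PROP_LDAP_LOOKUP
        source = "ldap"
    elif "spreadsheet.name" in field_properties:
        source = "spreadsheet"
    else:
        source = "static options"
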
Dan Funk dff92c6e4c fixing a test. 2020-05-18 20:49:50 -04:00
Dan Funk 5c5c2a7312 Assure that new lines entered in text-fields are correctly added to the final word document. 2020-05-18 11:55:10 -04:00
Dan Funk 7a380fdeb4 Forgot to fix another test and to add the example file used for a previous test. 2020-05-16 15:38:15 -04:00
Dan Funk 43bf8f5337 Fixing a bug with navigation where elements went missing past exclusive gateways on subsequent forks. 2020-05-16 15:33:06 -04:00
Dan Funk de435bd961 the heck with camel case, what the heck TypeScript? Get a grip. This is a python API. 2020-05-15 16:38:37 -04:00
Dan Funk 53255ef35e massive overhaul of the Workflow API endpoint.
No Previous Task, No Last Task, No Task List.  Just the current task, and the Navigation.
Use the token endpoint to set the current task, even if it is a "READY" task in the api.
Previous Task can be set by identifying the prior task in the Navigation (I'm hoping).
Preferring camel case to snake case on all new apis.  Maybe clean the rest up later.
2020-05-15 15:54:53 -04:00
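
An illustrative guess at the slimmed-down payload described above (just the current task plus the navigation); this is not the actual schema, only the shape of the idea.

    workflow_api = {
        "id": 42,
        "status": "user_input_required",
        "task": {"id": "task-1234", "name": "enter_pi", "state": "READY"},
        "navigation": [
            {"id": 1, "title": "Enter PI", "state": "READY"},
            {"id": 2, "title": "Review", "state": "FUTURE"},
        ],
    }
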
Dan Funk b63ee8159e We now only return the ready user tasks, not all tasks, and even then the ready user tasks don't come back with the forms and details, just the bare minimum. Speeds things up considerably, and most of this information wasn't used anyway. 2020-05-14 17:13:47 -04:00
Dan Funk 6d4348d644 Fixing some failing tests. Moved the task properties into a dictionary, but moving the form field properties to a dictionary will be a larger effort that we don't want to get into on either the back or front end right this moment. 2020-05-14 14:39:14 -04:00
Dan Funk 55a1850e7c adding a navigation component to the Workflow Model.
running all extension/properties through the Jinja template processor so you can have custom display names using data, very helpful for building multi-instance displays.
Properties was returned as an array of key/value pairs, which is just mean.  Switched this to a dictionary.
2020-05-14 13:43:23 -04:00
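
A hedged sketch of running an extension property through Jinja with the task data, as described above; the template text and data are illustrative.

    from jinja2 import Template

    task_data = {"investigator": {"display_name": "Dan Funk"}}
    title = Template("Review documents for {{ investigator.display_name }}").render(**task_data)
    # title == "Review documents for Dan Funk"
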
Dan Funk b7c11fd893 Merge branch 'master' into feature/investigators_reference_file 2020-05-11 17:36:37 -04:00
Dan Funk 02f8764056 Updated to use the latest script engine / evaluation engine that creates a single location where all values used in BPMN/DMN are processed. Right now this is a python based interpreter, but we will eventually base this on FEEL expressions.
The validation process needs to take the api model into account so we catch errors with bad file names.
2020-05-11 17:04:05 -04:00
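
Illustrative only: the simplest form of the single evaluation point described above, where BPMN/DMN expressions are run as python against the task data (the real engine does far more, and FEEL support would eventually replace the eval).

    def evaluate(expression, task_data):
        # one place where every BPMN/DMN expression gets evaluated
        return eval(expression, {}, task_data)

    assert evaluate("age >= 18 and country == 'US'", {"age": 21, "country": "US"})
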
Dan Funk da7cae51b8 Adding a new reference file that provides greater details about the investigators related to a study.
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less subject to breaking through better mocks.
Allow all tests to pass even when the protocol builder mock isn't running locally.
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
2020-05-07 13:57:24 -04:00
Dan Funk 1571986c0e I had to give up and live with the idea that we can only render documentation on the current task, not on the previous or next tasks. I think this is ok. If you want to view a task, you need to make it the active task to assure all the parts and pieces are in place. 2020-05-06 13:01:38 -04:00
Dan Funk 07e58e923d Merge remote-tracking branch 'origin/chore/update_specs' into feature/previous_task
# Conflicts:
#	Pipfile.lock

Assuring that all documents from the xls spreadsheet are loaded when doing validations.
2020-05-06 11:25:50 -04:00
Dan Funk 9629b36e92 Setting JSON_SORT_KEYS to false, assuring that Flask does not re-sort all data returned to the front end.
Updating Spiff Workflow which has some critical behavioral changes around MultiInstance.
2020-05-06 10:59:49 -04:00
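
The Flask setting referenced above; with it disabled, jsonify returns keys in insertion order rather than alphabetical order. A minimal example:

    from flask import Flask

    app = Flask(__name__)
    app.config['JSON_SORT_KEYS'] = False
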
Aaron Louie 4ecb0cb3a3 Updates BPMN files 2020-05-05 16:15:38 -04:00
Dan Funk 714b5f3be0 Merge branch 'feature/protocol_status' into feature/previous_task
# Conflicts:
#	crc/services/study_service.py
2020-05-04 11:08:36 -04:00
Dan Funk 2699f5c65c Refactor the stats models, and assure they are correct across all tests with the workflow api.
I noticed we were saving the workflow every time we loaded it up, rather than only when we were making changes to it.  Refactored this to be a little more careful.
Centralized the saving of the workflow into one location in the processor, so we can make sure we update all the details about that workflow every time we save.
The workflow service has a method that will log any task action taken in a consistent way.
The stats models were removed from the API completely.  Will wait for a use case for dealing with this later.
2020-05-04 10:57:09 -04:00
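
A hedged sketch of the two ideas above: a single save point that refreshes the workflow's details, and a consistent task-action log on the workflow service. Class, method, and field names are assumptions, not the project's code.

    from datetime import datetime

    class WorkflowProcessor(object):
        def __init__(self, session, workflow_model):
            self.session = session
            self.workflow_model = workflow_model

        def save(self):
            # every detail about the workflow is refreshed here, and only here
            self.workflow_model.last_updated = datetime.utcnow()
            self.session.add(self.workflow_model)
            self.session.commit()

    def log_task_action(user_uid, workflow_id, task_name, action):
        # one consistent place to record every task action taken
        print(user_uid, workflow_id, task_name, action, datetime.utcnow())
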
Dan Funk 7c743d65ed Adding tests around parallel which pass great, but ran into a bug down in Spiff that I'm passing back to Kelly with a test in that repo. More coming on this. 2020-04-30 15:39:11 -04:00