Commit Graph

329 Commits

Author SHA1 Message Date
Carlos Lopez 5327b469f6 Merge branch 'rrt/dev' into feature/emails-enhancement 2020-06-24 21:46:53 -06:00
Carlos Lopez dd10e56d1a Adding forgotten variables to returned dict 2020-06-22 14:56:24 -06:00
Carlos Lopez dc5ffd29d0 Refactoring shared code 2020-06-22 14:07:57 -06:00
Carlos Lopez e5541e4950 Enable CSV download 2020-06-22 09:24:58 -06:00
Carlos Lopez b8d60ca944 Spreadsheet generation 2020-06-22 07:14:00 -06:00
Dan Funk 6aec15cc7c Shifting to a different model, where the TaskEvents store ONLY the form data submitted for that task.
In order to allow proper deletion of tasks, we no longer merge data returned from the front end; we set it directly as the task_data.
When returning data to the front end, we take any previous form submission and merge it into the current task data, allowing users to keep their previous submissions.
There is now an "extract_form_data" method that does its best job to calculate what form data might have changed from the front end.
2020-06-19 08:22:53 -04:00
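
A minimal sketch of the kind of comparison the "extract_form_data" method described above might perform, assuming task data lives in plain dictionaries (the signature here is illustrative, not the project's actual one):

    def extract_form_data(latest_data, previous_data):
        """Return only the keys the front end appears to have added or changed,
        compared to the data the task started with."""
        changed = {}
        previous_data = previous_data or {}
        for key, value in latest_data.items():
            if key not in previous_data or previous_data[key] != value:
                changed[key] = value
        return changed
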
Carlos Lopez 4db815a999 Handling incoming values from processor 2020-06-17 21:11:47 -06:00
Carlos Lopez e947f40ec7 Merge branch 'rrt/dev' into feature/emails-enhancement 2020-06-17 20:10:11 -06:00
Carlos Lopez 896ba6b377 Email relies now on markdown content 2020-06-17 17:00:16 -06:00
Dan Funk da048d358e Merge branch 'dev' into feature/refactor_data_loading 2020-06-17 17:14:30 -04:00
Dan Funk 3b57adb84c Continuing a major refactor. Some important points:
* TaskEvents now contain the data for each event as it was when the task was completed.
* When loading a task for the front end, if the task was completed previously, we take that data, and overwrite it with the latest data, allowing users to see previously entered values.
* Pulling in the Admin branch, as there are changes in that branch that are critical to seeing what is happening when we do this thing.
* Moved code for converting a workflow to an API ready data structure into the Workflow service where it belongs, and out of the API.
* Hard resets just convert to using the latest spec, they don't try to keep the data from the last task.  There is a better way.
* Moving to a previous task does not attempt to keep the data from the last completed task.
* Added a function that will fix all the existing RRT data by adding critical data into the TaskEvent model. This can be called from the flask command line tool.
2020-06-17 17:11:15 -04:00
Aaron Louie 40ff3b5fcb Uses mock LDAP if LDAP_URL environment variable is 'mock' 2020-06-16 21:43:20 -04:00
Dan Funk 1b9166dcb7 Cleaning up the merge, which resulted in some lost code. 2020-06-16 13:34:21 -04:00
Dan Funk c7d8eaff30 Merge branch 'dev' into feature/refactor_data_loading 2020-06-16 13:15:43 -04:00
Dan Funk 92580ee867 adding an additional error check for invalid json returned from the Protocol builder. 2020-06-15 12:26:53 -04:00
Dan Funk eec4b579a7 Don't error out if we don't have a valid study id when doing validation. 2020-06-15 11:27:28 -04:00
Carlos Lopez 5f13b96079 More enhancements 2020-06-12 12:17:08 -06:00
Aaron Louie 2a84d5196a filter, not filter_by 2020-06-12 14:13:27 -04:00
Aaron Louie e3126620b3 Eschews obfuscation 2020-06-12 14:09:08 -04:00
Aaron Louie 561e254315 Prevents non-admin users from editing each others' tasks. Fixes bug where test user uid was not being set from token. Moves complete form and get workflow API test utility methods into BaseTest. 2020-06-12 13:46:10 -04:00
Carlos Lopez e9e805b2c9 Storing emails in database 2020-06-09 22:57:56 -06:00
Dan Funk 777e429382 Merging production back into dev, because we got out of whack somehow. 2020-06-08 14:19:30 -04:00
Dan Funk 8cf420b781 Default mail user name and password to blank. 2020-06-08 14:15:56 -04:00
Dan Funk e370148380
Merge pull request #112 from sartography/rrt/dev
Rrt/dev
2020-06-08 13:18:41 -04:00
Carlos Lopez 0351b3548a Adding bcc to all emails sent 2020-06-08 11:17:17 -06:00
Carlos Lopez f91fbf76b9 Capturing explicit errors from mails 2020-06-08 09:16:26 -06:00
Aaron Louie af1c848f65 Merge branch 'rrt/dev' into rrt/testing 2020-06-07 19:57:09 -04:00
Dan Funk 213d3f3501 Merge branch 'feature/better_approval_status' into rrt/dev 2020-06-05 19:11:16 -04:00
Dan Funk 6861991d8f Allow setting the type of approvals you want back, by status.
Some very minor performance enhancements that will add up on the Approvers page.
2020-06-05 17:49:55 -04:00
Aaron Louie f0904e75a6 Sets main approval status after related approvals have been populated 2020-06-05 15:54:53 -04:00
Carlos Lopez 663da57d8b Config can read smtp values from environment now 2020-06-05 13:54:37 -06:00
Carlos Lopez 57a7c7fa54 Approve/deny fixes 2020-06-05 13:39:52 -06:00
Dan Funk 1f32a99efe Some approval statuses were coming back as null; fixed. 2020-06-05 14:55:49 -04:00
Carlos Lopez 16ca4fa2c0 Merge branch 'rrt/dev' into feature/mail-system 2020-06-05 12:35:05 -06:00
Dan Funk f0db5b70fc Adding some additional logic to the approval endpoint so that we take related approvals into account when setting the status. In addition to previous status options, there is a new status of "AWAITING" which means there are pending approvals before this approval that still need to be approved or canceled. 2020-06-05 14:33:00 -04:00
Carlos Lopez 4fc1b51cbc Approve/denied emails 2020-06-05 12:08:31 -06:00
Carlos Lopez 4727d87adb Hooking up emails into process - start 2020-06-04 20:37:28 -06:00
Dan Funk b6abb0cbe2 using a restartable strategy to get around login errors 2020-06-04 18:03:59 -04:00
Dan Funk 9cfe00dfd0 Don't bind all the time. 2020-06-04 15:38:45 -04:00
Dan Funk fed6e86f92 Trying to fix LDAP issues on production. Changing LDAP to static only methods, caching the connection and calling bind before all connection requests.
Also assuring we don't load the documents.xls file over and over again.
2020-06-04 14:59:36 -04:00
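
A rough illustration of the caching pattern described in the commit above, using the ldap3 library; the URL, base DN, and attribute names are placeholders:

    from ldap3 import Connection, Server

    class LdapService:
        _connection = None  # constructed once and cached across calls

        @staticmethod
        def _conn():
            if LdapService._connection is None:
                server = Server("ldap://ldap.example.org")  # placeholder URL
                LdapService._connection = Connection(server)
            # re-bind before every request, as the commit describes
            LdapService._connection.bind()
            return LdapService._connection

        @staticmethod
        def user_info(uid):
            conn = LdapService._conn()
            conn.search("ou=People,dc=example,dc=org",  # placeholder base DN
                        f"(uid={uid})", attributes=["cn", "mail"])
            return conn.entries
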
Carlos Lopez 8c36d9f367 Email calls outline 2020-06-04 11:43:10 -06:00
Dan Funk 50d2acac9c Made a very stupid mistake with LDAP connections, pushing up quickly to production. 2020-06-04 11:57:00 -04:00
Dan Funk 68aeaf1273 BE VERY CAREFUL where you create a new LdapService() - construction is expensive.
Adding a few more details to the "csv" endpoint for RRT.
2020-06-04 10:33:17 -04:00
Dan Funk bbcbfef1ba Fixing the migrations so I don't break the universe. 2020-06-04 10:09:36 -04:00
Dan Funk 1324533865 Some additional cleanup - when a file is "archived" it is no longer returned for any endpoints about files, but it
is directly accessible via id, in the event some request is made for it at a later date.
2020-06-04 09:49:42 -04:00
Carlos Lopez f581bd9f2b Mails for approval process 2020-06-04 00:35:59 -06:00
Dan Funk 217ecfc911 When you can't delete a file, mark it as archived. Don't include archived files in new approval requests. 2020-06-03 17:34:27 -04:00
Dan Funk c179c7781b Do not process or return cancelled approvals via the API. 2020-06-03 16:50:47 -04:00
Dan Funk e102214809 minor cleanup of error codes. 2020-06-03 15:03:22 -04:00
Dan Funk 299ad4fc8b Adding more details to the csv output, and assuring we don't miss people with outstanding approvals that were cancelled. 2020-06-03 07:58:48 -04:00
Dan Funk c7484267e1 For the main approval endpoints - we now group the approvals by study. So you get one record back for each study, but it may have other approvals along with it as "related_approvals".
We now cache the LDAP records - so we look in our own database for the record before calling out to ldap for the details when given a straight up computing id like dhf8r.

Added "date_approved" to the approval model.

And moved the approver and primary investigator into real associated models to make it easier to dump.

Fixed a problem with the validation that was causing it to throw incorrect errors on valid workflows. Getting it to behave a little more like the front end behaves, and respecting the read-only fields.  But it was mainly to do with always returning all the data with each form submission.
2020-06-02 18:17:00 -04:00
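
The grouping behavior described above could look roughly like this; related_approvals is the field named in the commit, everything else is illustrative:

    from collections import defaultdict

    def group_approvals_by_study(approvals):
        """Return one primary approval per study, with any remaining
        approvals for that study attached as related_approvals."""
        by_study = defaultdict(list)
        for approval in approvals:
            by_study[approval.study_id].append(approval)

        grouped = []
        for study_approvals in by_study.values():
            primary, *related = study_approvals
            primary.related_approvals = related
            grouped.append(primary)
        return grouped
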
Dan Funk 1db9401166 Don't put all the data into Spiff Tasks on a reload or backtrack, just store the data that gets submitted each time in the task log, and use that.
This should correct issues with parallel tasks and other complex areas - so we don't have tasks seeing data that isn't along their path.
2020-06-01 17:42:39 -04:00
Dan Funk b6bf843f6e used 'name' rather than 'value' in the lookup options during validation, causing a disconnect with how this is processed on the front end. 2020-06-01 11:00:56 -04:00
Aaron Louie 7c8b7829ea Merge branch 'rrt/dev' into feature/approvals_enhancements 2020-06-01 00:41:51 -04:00
Carlos Lopez bec11980eb Fixing broken test by using proper FileSchema 2020-05-31 22:00:52 -06:00
Carlos Lopez b2e56f797b Converting ApprovalModel to Approval in order to serialize properly the result 2020-05-31 21:02:47 -06:00
Aaron Louie f0bd8d4f9e Adds approvals to study schema. Adds approvals endpoint 2020-05-31 22:46:32 -04:00
Dan Funk 9c7de39b09 This adds additional file data details to the study model as well. 2020-05-31 21:15:40 -04:00
Carlos Lopez 26809d1470 Waiting status renaming 2020-05-31 13:35:42 -06:00
Dan Funk 2bc735a3f0 Kelly wrote a beautiful method for resetting the workflow that doesn't lose data when reset inside a parallel task; all I needed to do was use it.
Catching TypeErrors and reporting them back to the UI so we don't 500 in a bad way (but we still 500)
2020-05-31 13:48:00 -04:00
Dan Funk 98fb305868 Run the validation process twice, each time it is requested, first populating everything, and then a second time
using only the required form fields.
2020-05-30 18:43:20 -04:00
Dan Funk 1f0e8741ba Run the validation twice, once completing all of the data, and a second time, completing only the required fields.
Also, add a helper method to reduce boilerplate code around Workflow Exceptions.
2020-05-30 17:26:27 -04:00
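
A thin sketch of the two-pass validation the two commits above describe (the populate callable is supplied by the caller; names are assumptions):

    def validate_twice(spec_id, populate_form):
        """Run validation twice: once completing every field,
        then again completing only the required fields."""
        for required_only in (False, True):
            populate_form(spec_id, required_only=required_only)
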
Dan Funk 860c475b29 Fill out repeating sections during validation process.
Also, when returning error messages, attempt to include the task data for the task that caused the error.
Also, when attempting to delete any file, respond with an API error explaining the issue, and log the details.
2020-05-30 15:37:04 -04:00
Dan Funk 4e4cc7884c Better ldap searching. 2020-05-29 15:17:51 -04:00
Dan Funk afb6be7c60 re-working the way the redirects function, so we pass arguments as a get parameter. Just trying to get rid of the weird lag on production.
I noticed the validation sometimes looks ahead for files, so looking at all the tasks now, not just the ready tasks for the lookup field.
Ran into an issue with validation where a workflow model was required, so I create one and delete it.  Another refactor for another day.
2020-05-29 04:42:48 -04:00
Dan Funk 11413838a7 Faster lookup fields. We were parsing the spec each time to get details about how to search. We're just grabbing the workflow id and task id now and building that straight into the full text search index for faster lookups. Should be peppy.
Another speed improvement - data in the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need to use this model frequently.
2020-05-29 01:39:39 -04:00
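
Deferring a large column in SQLAlchemy, as described above for the FileDataModel, looks roughly like this (column names are assumptions):

    from sqlalchemy import Column, Integer, LargeBinary
    from sqlalchemy.orm import declarative_base, deferred

    Base = declarative_base()

    class FileDataModel(Base):
        __tablename__ = "file_data"
        id = Column(Integer, primary_key=True)
        # Deferred: the potentially large payload is only fetched when
        # .data is actually accessed, not with every row query.
        data = deferred(Column(LargeBinary))
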
Dan Funk aea78de066 Merge remote-tracking branch 'origin/dev' into feature/file_refactor_part2 2020-05-28 20:06:32 -04:00
Dan Funk dba41f4759 Ludicrously stupid launch into a refactor of the way all files work in the system at a time where I crave sleep and peace above all other things.
Added a File class that we wrap around the FileModel so the api endpoints don't change, but File no longer holds references to versions or dates of the file_data model, we
figure this out based on a clean database structure.

The ApprovalFile is directly related to the file_data_model - so no chance that a reviewer would review the incorrect version of a file.

Noticed that our FileType enum called "bpmn" "bpmm", hope this doesn't screw someone up.

Workflows are directly related to the data_models that create the workflow spec it needs.  So the files should always be there.  There are no more hashes, and thus no more hash errors where it can't find the files to rebuild the workflow.

Not much to report here, other than I broke every single test in the system at one point.  So I'm super concerned about this, and will be testing it a lot before creating the pull request.
2020-05-28 20:03:50 -04:00
Dan Funk cd7f67ab48 A major refactor of how we search and store files, as there was a lot of confusing bits in here.
From an API point of view you can do the following (and only the following)

/files?workflow_spec_id=x
* You can find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id
/files?workflow_id=x
* You can find all files associated with a workflow_id, and add a file that is directly associated with the workflow
/files?workflow_id=x&form_field_key=y
* You can find all files associated with a form element on a running workflow, and add a new file.
   Note: you can add multiple files to the same form_field_key, IF they have different file names. If the same name, the original file is archived,
   and the new file takes its place.

The study endpoints always return a list of the file metadata associated with the study.  Removed /studies-files, but there is an endpoint called

/studies/all  - that returns all the studies in the system, and does include their files.

On a deeper level:
 The File model no longer contains:
  - study_id,
  - task_id,
  - form_field_key

Instead, if the file is associated with workflow - then that is the one way it is connected to the study, and we use this relationship to find files for a study.
A file is never associated with a task_id, as these change when the workflow is reloaded.
The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
2020-05-28 08:27:26 -04:00
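
Hypothetical client calls against the three query styles listed above; the base URL and values are placeholders:

    import requests

    BASE = "http://localhost:5000/v1.0"  # placeholder base URL

    # Files attached to a workflow specification
    requests.get(f"{BASE}/files", params={"workflow_spec_id": "top_level_workflow"})

    # Files attached directly to a running workflow
    requests.get(f"{BASE}/files", params={"workflow_id": 42})

    # Files uploaded through a form field (the key doubles as the irb_doc_code)
    requests.get(f"{BASE}/files",
                 params={"workflow_id": 42, "form_field_key": "IRB_INFOSEC_DOC"})
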
Aaron Louie 97cdbfce94 Deletes extra line break 2020-05-27 23:48:48 -04:00
Dan Funk 77f72e408f Lookup Service now raises exact matches to the top. Very hackish, but it works. 2020-05-27 14:36:10 -04:00
Dan Funk d5e075db82 Order search results by relevancy in the lookup service. 2020-05-27 09:47:44 -04:00
Dan Funk 397cb23b52 Is true "true", yes it is true. So true, is "false", so true, it is true. 2020-05-26 23:38:57 -04:00
Dan Funk 0025931a2e Trying hard to figure out why the DCOS servers think the protocol builder is enabled. 2020-05-26 23:18:14 -04:00
Dan Funk 7869fa596e Protocol Builder isn't disabled on the dcos servers, trying to figure out why, and assure it isn't some sort of weird race condition. 2020-05-26 22:42:49 -04:00
Dan Funk ccbf374b40 Loads of bug fixes.
Modified the request_approval to take a list of arguments, which works better for us... today.
UpdateStudy correctly handles validation.
WorkflowService correctly populates random values from lookup tables.
And several fixes down in Spiffworkflow, including a big bug where only the last item in a decision table made it through.
2020-05-26 20:06:50 -04:00
Dan Funk 13186176ba Improved LDAP searches, allow filtering on last name as well as uva id. 2020-05-25 16:00:36 -04:00
Dan Funk be057e8758 Adding an "UpdateStudy" task that is able to update the data on the study model, useful for setting core data points on the model, such as setting the Primary Investigator, or altering the Study Title.
Fixing a bug where the validation of forms did not correctly process auto-complete fields.
Fixing a bug where the approvals script and the update study script could not process dot notation correctly.

Moved populate_random_data into the WorkflowService where it makes more sense.
2020-05-25 15:30:06 -04:00
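
A sketch of the dot-notation handling the approvals and update-study scripts needed, per the fix above (the helper name is hypothetical):

    def set_dot_value(target: dict, path: str, value):
        """Write value into a nested dict given a dotted path, e.g.
        set_dot_value(study, "primary_investigator.uid", "dhf8r")."""
        keys = path.split(".")
        for key in keys[:-1]:
            target = target.setdefault(key, {})
        target[keys[-1]] = value
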
Dan Funk 6cd4ef64d1 Fixing add_study api endpoint, so you can actually add a new "Study" with just some basic information.
Using the LDAP service for checking user details in development mode - even if you are using the back door.
Added a new Flask function load-example-rrt-data that loads the rrt workflow, and not the CRC workflows.
Modified the "load-example-data" in the tests to use some test data, rather than loading up all the workflows
in CRC each time, with a parameter to load crc data if that is required - which is enabled for just a handful of tests.
(Tests run in 1/4 the time now)
2020-05-25 12:29:05 -04:00
Dan Funk 971d9a58e9 As we now have an approval_service.py, I moved all the business logic into this service and out of the request_approval.py script. And moved all tests for these features into a test file for the service. Will make it easier to cross reference what is happening, as everything all happens in one file.
As many of the scripts need to know the workflow, and it's down in a weird parameter, moved this so it is passed in each time.
2020-05-24 16:13:15 -04:00
Carlos Lopez 49eb4b3f98 Making working endpoints for approvals 2020-05-23 23:53:48 -06:00
Dan Funk d5c91e575f stuff that might be broken. 2020-05-23 15:21:30 -04:00
Dan Funk d39ef658a2 Made some modifications to the Approval so that it knows exactly what versions of every file are being sent for approval
Added the following columns:
  * date_created - so we know when the file was created
* renamed workflow_version to just "version", because everything has a version; this is the version of the request.
  * workflow_hash - this is just a quick way to see what files and versions are associated with the request, it could be factored out.
  * study - a quick relationship link to the study, so that this model is easier to use.
  * workflow - ditto
* approval_files - this is a list from a new link table that links an approval to specific files and versions.

The RequestApproval is logically sound, but still needs some additional pieces in place to be callable from a BPMN workflow diagram.

Altered the file service to pick up on changes to files vs adding new files, so that versions are picked up correctly as
users modify their submission - adding new files or replacing existing ones.  Deleting files worries me, and I will need to revisit this.

The damn base test keeps giving me a headache, so I made changes there to see if clearing and dropping the database each time won't allow the tests to pass more consistently.

Lots more tests around the file service to make sure it is versioning user uploaded files correctly.

The "Test Request Approval Script" tries to assure the correct behavior, as this is likely to be called many times repeatedly and with little knowledge of the internal system.  So it should just "do the right thing".
2020-05-23 15:08:17 -04:00
Dan Funk 571c1d7d24 Merge branch 'feature/rrp-endpoints' into feature/disable_protocol_builder 2020-05-22 16:18:33 -04:00
Dan Funk 503c1c8f18 Allow disabling the Protocol Builder
PB_ENABLED can be set to false in the configuration (either in a file called instance/config.py, or as an environment variable)

Added a check in the base_test, to assure that we are always running tests with the test configuration, and bail out otherwise.  Setting TESTING=true as an environment variable will get this, but so will the correct ordering of imports. Just be dead certain the first file every test file imports is base_test.py.

Aaron was right, and we call the Protocol Builder in all kinds of awful places.  But we don't do this now.  So Carlos, you should have the ability to reuse a lot of the logic in the study_service now.

I dropped the poorly named "study-update" endpoint completely.  We weren't using it. POST and PUT to Study still work just fine for doing exactly that.

All the tests now run and pass with the Protocol builder disabled. Tests that specifically check PB behavior turn it back on for the test, or mock it out.
2020-05-22 14:37:49 -04:00
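
A plausible instance/config.py override for the flag described above; the environment-variable fallback is an assumption:

    # instance/config.py -- local overrides
    import os

    # Disable all calls out to the Protocol Builder; an environment variable
    # of the same name takes precedence when set.
    PB_ENABLED = os.environ.get("PB_ENABLED", "false").lower() == "true"
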
Carlos Lopez a0c884499e Merge branch 'feature/rrp-endpoints' of github.com:sartography/cr-connect-workflow into feature/rrp-endpoints 2020-05-22 09:46:25 -06:00
Carlos Lopez 1ed7930aab Endpoint for studies with files 2020-05-22 09:46:03 -06:00
Dan Funk b490005af7 dropping the remaining config stuff for flask_sso.
updating the user 'sso' endpoint to provide additional information for debugging.
Pulling information from ldap to stay super consistent on where we get our information.
2020-05-22 09:50:18 -04:00
Aaron Louie 48f9873548 Adding yet another flush, because Travis builds keep failing due to database race condition issues in this method. 2020-05-20 10:02:30 -04:00
Aaron Louie 58189285ad Cleans up 2020-05-20 00:12:48 -04:00
Aaron Louie 2b5687c3a3 Fixes pernicious bug where template document versions were not being updated properly, and template completion script was not honoring version numbers 2020-05-20 00:10:32 -04:00
Aaron Louie 481a9ed04c Gets image ids from task data and Injects images into jinja docx 2020-05-19 21:51:54 -04:00
Dan Funk c4f2bd8dc6 Quick cleanup, adding a space 2020-05-19 16:23:20 -04:00
Dan Funk 951710d762 ldap lookup.
Refactored calls into a new lookup_service to keep things tidy.

New keys for all enum/auto-complete fields:
    PROP_OPTIONS_FILE = "spreadsheet.name"
    PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
    PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
    PROP_LDAP_LOOKUP = "ldap.lookup"
    FIELD_TYPE_AUTO_COMPLETE = "autocomplete"
2020-05-19 16:11:43 -04:00
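
Represented as a plain dictionary, the extension properties on an auto-complete form field might look like this; the spreadsheet and column names are invented examples:

    field_type = "autocomplete"                      # FIELD_TYPE_AUTO_COMPLETE
    field_properties = {
        "spreadsheet.name": "SponsorsList.xls",      # PROP_OPTIONS_FILE
        "spreadsheet.value.column": "sponsor_id",    # PROP_OPTIONS_VALUE_COLUMN
        "spreadsheet.label.column": "sponsor_name",  # PROP_OPTIONS_LABEL_COL
    }
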
Dan Funk de435bd961 the heck with camel case, what the heck TypeScript? Get a grip. This is a python API. 2020-05-15 16:38:37 -04:00
Dan Funk 53255ef35e massive overhaul of the Workflow API endpoint.
No Previous Task, No Last Task, No Task List.  Just the current task, and the Navigation.
Use the token endpoint to set the current task, even if it is a "READY" task in the api.
Previous Task can be set by identifying the prior task in the Navigation (I'm hoping)
Preferring camel case to snake case on all new apis.  Maybe clean the rest up later.
2020-05-15 15:54:53 -04:00
Dan Funk b63ee8159e We now only return the ready user tasks, not all tasks, and even then the ready user tasks don't come back with the forms and details, just the bare minimum. Speeds things up considerably, and most of this information wasn't used anyway. 2020-05-14 17:13:47 -04:00
Dan Funk 55a1850e7c adding a navigation component to the Workflow Model.
running all extension/properties through the Jinja template processor so you can have custom display names using data, very helpful for building multi-instance displays.
Properties was returned as an array of key/value pairs, which is just mean.  Switched this to a dictionary.
2020-05-14 13:43:23 -04:00
Dan Funk e723992fde Found a number of bugs with the parallel multi-instance - pulling in some recent changes from Spiffworkflow to open things up a bit more to allow functional jumping between tasks. 2020-05-12 12:23:43 -04:00
Dan Funk b7c11fd893 Merge branch 'master' into feature/investigators_reference_file 2020-05-11 17:36:37 -04:00
Dan Funk 02f8764056 Updated to use the latest script engine / evaluation engine that creates a single location where all values used in BPMN/DMN are processed. Right now this is a python based interpreter, but we will eventually base this on FEEL expressions.
The validation process needs to take the api model into account so we catch errors with bad file names.
2020-05-11 17:04:05 -04:00
Dan Funk da7cae51b8 Adding a new reference file that provides greater details about the investigators related to a study.
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less subject to breaking through better mocks.
Allow all tests to pass even when the protocol builder mock isn't running locally.
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
2020-05-07 13:57:24 -04:00
Dan Funk 1571986c0e I had to give up and live with the idea that we can only render documentation on the current task, not on the previous or next tasks. I think this is ok. If you want to view a task, you need to make it the active task to assure all the parts and pieces are in place. 2020-05-06 13:01:38 -04:00
Dan Funk 8ded625c7d Merge remote-tracking branch 'origin/chore/update_specs' into feature/previous_task
# Conflicts:
#	Pipfile.lock

Assuring that all documents from the xls spreadsheet are loaded when doing validations.
Fixing some failed tests.
2020-05-06 11:46:19 -04:00
Dan Funk 07e58e923d Merge remote-tracking branch 'origin/chore/update_specs' into feature/previous_task
# Conflicts:
#	Pipfile.lock

Assuring that all documents from the xls spreadsheet are loaded when doing validations.
2020-05-06 11:25:50 -04:00
Dan Funk 9629b36e92 Setting JSON_SORT_KEYS to false, assuring that Flask does not resort all data returned to the front end.
Updating Spiff Workflow which has some critical behavioral changes around MultiInstance.
2020-05-06 10:59:49 -04:00
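
The Flask setting mentioned above, shown in context (the app construction is illustrative):

    from flask import Flask

    app = Flask(__name__)
    # Keep dictionaries in the order they were built rather than letting
    # Flask's JSON encoder alphabetize keys on the way to the front end.
    app.config["JSON_SORT_KEYS"] = False
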
Dan Funk 714b5f3be0 Merge branch 'feature/protocol_status' into feature/previous_task
# Conflicts:
#	crc/services/study_service.py
2020-05-04 11:08:36 -04:00
Dan Funk 2699f5c65c Refactor the stats models, and assure they are very correct across all tests with the workflow api.
I noticed we were saving the workflow every time we loaded it up, rather than only when we were making changes to it.  Refactored this to be a little more careful.
Centralized the saving of the workflow into one location in the processor, so we can make sure we update all the details about that workflow every time we save.
The workflow service has a method that will log any task action taken in a consistent way.
The stats models were removed from the API completely.  Will wait for a use case for dealing with this later.
2020-05-04 10:57:09 -04:00
Dan Funk 1f5002680a Initial work on a "Previous" task. 2020-05-01 12:11:39 -04:00
Dan Funk bec59a71d7 Deleting stuff is a damn mess, but this is a little cleaner. 2020-04-29 16:07:39 -04:00
Dan Funk f1f8b91c9c Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shortening the SponsorsList.xls file makes for slightly faster tests; it seems senseless to load 5000 every time we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
Aaron Louie beb86f0453 Adds protocol script to study service 2020-04-29 10:21:24 -04:00
Dan Funk 3e3a249e3c Verifying Sub-Process works, and adding a field to expose a hint as to the sub-process in which the task occurs.
Because the name field is now used to expose workflow/sub-process information on tasks, we can't use it to store the workflow_version, so that is now just stored on the database model.  Which is much cleaner and removes a duplication.
2020-04-28 13:48:44 -04:00
Dan Funk 447f4013f8 Assure that a hard-reset sticks, and the system is properly updated. 2020-04-27 16:08:23 -04:00
Dan Funk 1b9743a4d1 Assure that if a form has an enumeration it errors out if that enumeration is empty. 2020-04-27 15:10:09 -04:00
Aaron Louie 8ed520c6f1 Removes hidden workflows 2020-04-24 09:45:55 -04:00
Aaron Louie c85173de88 Sorts approvals by display order 2020-04-24 08:54:14 -04:00
Dan Funk 1ccedbc9fd Merge branch 'master' of github.com:sartography/cr-connect-workflow 2020-04-24 07:01:40 -04:00
Dan Funk 12eb039bc9 Server isn't erroring out, but can't find the lookup table id in the database, so trying to use the in-memory model instead, to give things time to get to the database. Really unsure what is happening here. Hard to see in the database. 2020-04-24 07:01:32 -04:00
Aaron Louie af1bb9f80d Adds more useful metadata to approvals and documents status scripts. Fleshes out and pretties up Documents & Approvals screen 2020-04-23 23:32:20 -04:00
Aaron Louie 47de010a88 Puts data from sequential calls to StudyInfo into the right place. Sets the required document flag correctly. 2020-04-23 21:02:08 -04:00
Aaron Louie d91f690388 Adds documents_status StudyInfo script. Adds Documents & Approvals workflow spec. 2020-04-23 19:25:01 -04:00
Dan Funk 08140eca17 Merge branch 'master' of github.com:sartography/cr-connect-workflow 2020-04-23 15:01:02 -04:00
Dan Funk 3aeb7ad116 Server isn't erroring out, but can't find the lookup table id in the database, so trying to use the in-memory model instead, to give things time to get to the database. Really unsure what is happening here. Hard to see in the database. 2020-04-23 14:58:17 -04:00
Aaron Louie 796c109611 Adds approvals to study service 2020-04-23 14:40:05 -04:00
Dan Funk b5b46b7c2c better overall search results for type ahead. Still dealing with stop words failing. 2020-04-23 12:05:08 -04:00
Dan Funk 65b29e1a9d Don't just bomb out as soon as someone types an empty string. 2020-04-23 09:44:11 -04:00
Dan Funk 7b085c9c9d Adding an API Endpoint that will return a list of LookupValues that match a given query - can be used to populate an auto-complete table. 2020-04-22 19:40:40 -04:00
Dan Funk 6de8c8b977 Create lookup tables for XLS files referenced in workflows so we can do full text searches and populate lists on the fly quickly. 2020-04-22 15:37:02 -04:00
Dan Funk fd0adb1d43 Updated the study status to use a different enumeration. Migration correctly handles modifying the enum.
  INCOMPLETE = 'Incomplete in Protocol Builder',
  ACTIVE = 'Active / Ready to roll',
  HOLD = 'On Hold',
  OPEN = 'Open - this study is in progress',
  ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
2020-04-21 17:13:30 -04:00
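
As a Python enumeration, the statuses listed above would read roughly like this (the class name is an assumption):

    import enum

    class StudyStatus(enum.Enum):
        INCOMPLETE = 'Incomplete in Protocol Builder'
        ACTIVE = 'Active / Ready to roll'
        HOLD = 'On Hold'
        OPEN = 'Open - this study is in progress'
        ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
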
Dan Funk 0a74bf8c44 We can now collect, and provide "extension properties" on a task as set in the camunda modeler.
These are provided as "properties" on a task, and are identical in structure to properties on a form field.
2020-04-21 12:07:59 -04:00
Dan Funk ec112f52be Make use of cleaner data provided by Spiffworkflow about multi-instance settings. 2020-04-21 11:43:43 -04:00
Dan Funk ee999a0f15 fixing a bunch of stupid mistakes because I am tired. 2020-04-20 20:28:12 -04:00
Dan Funk edbd75bb75 Connect LDAP Requests to the StudyInfo service so we get back additional details. 2020-04-20 16:02:13 -04:00
Dan Funk 2d3402a719 Ldap Service with Test and mocks.
LDAP_URL can be set in an environment variable.
2020-04-20 15:16:33 -04:00
Dan Funk d3dd9dcc25 Functional multi-instance - works with no changes to the front end - though I've added some attributes to task so we could give people a sense of how many iterations they will go through. 2020-04-19 15:14:10 -04:00
Dan Funk 241980f98f If you add a file to a workflow that has the exact same name as a Task Spec's ID, and an extension of "md", it will use that file as the markdown content, and ignore the markdown in the documentation on the task spec.
Moving the primary process id from the workflow model to the file model, and assuring it is updated properly.  This was causing a bug that would "lose" the workflow.
2020-04-17 13:30:32 -04:00
Dan Funk dc2895cb05 Allow configurators to upload xls files into a workflow for defining enumerations of values for dropdown lists in forms. Fixing lots of tests.
Found a problem where the documentation for elements was being processed BEFORE data was loaded from a script.  There still may be some issues here.

Ran into an issue with circular dependencies - handling it with a new workflow_service, and pulling computational logic out of the api_models - it was the right thing to do.
2020-04-15 11:13:32 -04:00
Dan Funk c79415a794 throw a sensible error when study is not found on get_study (don't 500)
some ugly fixes in the file_service for improving pandas output from spreadsheet processing that I need to revisit.
now that the spiff-workflow handles multi-instance, we can't have random multi-instance tasks around.
Improved tests around study deletion.
2020-04-08 13:28:43 -04:00
Aaron Louie 519a034d87 Updates last_updated when file data is saved. Returns last_updated as lastModified in response header for file data endpoint. 2020-04-08 12:58:55 -04:00
Dan Funk c6b6ee5d70 Renamed the required_docs script to just "documents", and it returns all documents in the irb_documents lookup table indexed on the "Code" - so details become available in the task data like "documents.IRB_INFOSEC_DOC.required".
Updated the irb_documents with shorter code names, thanks to Alex. Re-worked the DMN models so they can properly read from this new data structure.
2020-04-06 16:56:00 -04:00
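
The shape of the task data the renamed "documents" script produces, based on the example key in the commit above; fields other than "required" are illustrative:

    documents = {
        "IRB_INFOSEC_DOC": {
            "required": True,   # referenced as documents.IRB_INFOSEC_DOC.required
            "count": 0,         # illustrative extra detail
            "display_name": "Information Security Review",
        },
        # ... one entry per Code in the irb_documents lookup table
    }
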
Dan Funk e283b86466 Fixing a bug with deleting a study. 2020-04-06 13:08:17 -04:00
Dan Funk a322801c91 Allow a study to be deleted, even if some statistics are lying around. 2020-04-03 16:41:16 -04:00
Dan Funk 60a10bb688 Marshmallow isn't the right tool when dealing with large models with lots of null values. Rather than fight the process of managing the Study Details, I'm letting that fall through, and we can test on an individual value or maybe set up a constants array when that becomes meaningful. 2020-04-03 16:24:38 -04:00
Dan Funk 785918cb7f Be sure the validation process examines the data located in the documentation and correctly handles boolean fields. 2020-04-02 14:47:20 -04:00
Dan Funk 17796193de fixing a bug that was causing failing tests.
Adding id and spec_version to the workflow metadata.
Refactoring the processing of the master_spec so that it doesn't pollute the workflow database.
Adding tests to assure that the status and counts are updated on the workflow model as users make progress.
2020-03-30 14:01:57 -04:00
Dan Funk 4a916c1ee3 Created a "StudyService" and moved all complex logic around study manipulation out of the study api, and into this service, as things were getting complicated. The Workflow Processor no longer creates the WorkflowModel, the study object handles that, and only passes the model into the workflow processor when it is ready to start the workflow.
Created a Study object (separate from the StudyModel) that can be constructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.

Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN, they have no version yet and no stored data, just the possibility of being started.

The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.

Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.

Example data no longer creates users or studies, it just creates the specs.
2020-03-30 08:00:16 -04:00
Dan Funk c9900d787e Every good deed goes punished. 2020-03-27 15:48:21 -04:00
Dan Funk c7d2c28178 Vastly more informative ApiError model that provides details on the underlying task where the error occurred.
Added a validate_workflow_specification endpoint that allows you to check if the workflow will execute from beginning to end using random data.
Minor fixes to existing bpmns to allow them to pass.
All scripts must include a "do_task_validate_only" that restricts external calls and database modifications, but performs as much logic as possible.
2020-03-27 08:29:31 -04:00
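
A bare-bones script following the rule above, assuming a base class with do_task and do_task_validate_only hooks (the exact signatures are assumptions):

    class ExampleScript:
        """Every script exposes a validate-only variant that avoids
        external calls and database writes."""

        def do_task(self, task, study_id, *args, **kwargs):
            # Real work: may hit the database or external services.
            task.data["example"] = self.fetch_from_external_service(study_id)

        def do_task_validate_only(self, task, study_id, *args, **kwargs):
            # Same shape of output, but canned data and no side effects.
            task.data["example"] = {"mocked": True}

        def fetch_from_external_service(self, study_id):
            raise NotImplementedError  # placeholder for the real call
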
Dan Funk b35427523d Merge remote-tracking branch 'origin/master' into feature/reference_files
# Conflicts:
#	crc/models/file.py
#	crc/services/file_service.py
#	tests/data/reference/irb_documents.xlsx
#	tests/test_files_api.py
2020-03-20 11:07:55 -04:00
Dan Funk 3eb1167b8e Found a few errors in the sqlalchemy file definition that were causing failures, and had some consistency problems with the IRB Categories file name. The API was bailing out because we had restricted file types to bpmn, svg, or dmn in the connexion config file; I don't restrict this anymore, we have plenty of checks elsewhere. Adding xlrd as a dependency - this didn't fail till a push to production. 2020-03-20 08:21:21 -04:00
Dan Funk 6e3b6c2635 Assure that files uploaded through web forms and files generated from templates can be cross-referenced to known document requirements from the protocol builder. Configurators can control this by managing an XLS Spreadsheet called "irb_documents.xlsx".
Required Documents is becoming complicated, so making this its own script task, removing it from study_info.py
The file_service is now very aware of this irb_documents file, so it will always need to exist.  We seed this file
during setup, but it can be overwritten by the configurator.
2020-03-19 17:14:20 -04:00
Dan Funk dbe6701bb2 Removing the doc_types from the protocol builder, as these will eventually contradict what is coming from IRB and should not be used as a reference. Also fixing a failing test and assuring that only one reference file ever exists with a given name. 2020-03-19 10:40:07 -04:00
Dan Funk 83d859fd3a Just merging stuff real quick. 2020-03-18 17:03:36 -04:00
Dan Funk 02be8ede75 Merge remote-tracking branch 'origin/master' into feature/reference_files 2020-03-18 15:16:34 -04:00
Aaron Louie 0cc98616fd Merge branch 'master' into feature/workflow_spec_categories 2020-03-16 10:25:03 -04:00
Aaron Louie cd6a70b747 Fixes code smell issues identified by SonarCloud 2020-03-16 08:31:19 -04:00
Dan Funk 779674ab60 Add the ability to upload and request general reference files by name. These will be used across workflows and will frequently contain lookup tables that can be referenced by various script tasks. 2020-03-13 15:03:57 -04:00
Aaron Louie 902dba7191 Adds is_status flag to workflow specs 2020-03-13 14:56:46 -04:00
Dan Funk 05b39df745 Fixes #12: Catching some specific common errors and re-raising as APIErrors with detailed codes and descriptions to improve debugging. In doing so, improving the error handling in the event a soft-reset causes an immediate error - and resetting to the original version of the specification in these events, to allow users the chance to try a hard reset instead. 2020-03-11 16:33:18 -04:00
Dan Funk 906bacff6a Expose a flag on the workflow model in the api to shown if it is using the latest spec. Added a soft_reset and hard_reset onto the workflow endpoint that will allow you to cause a hard or soft reset. 2020-03-05 16:45:44 -05:00
Dan Funk 7b21b78987 Workflow Processor will deserialize workflows using the version of the BPMN files used during creation, but allows for both a soft and hard reset - soft resets will use the new workflow without a restart and will retain full history. A hard-reset will restart the workflow from scratch, but will retain the data from the last completed task. Workflows have a complex version number associated with them that is used during the deserialization to find the correct files. 2020-03-05 15:38:30 -05:00
Dan Funk 697d930eab Modify the workflow processor to accept a workflow model - so it can take on more of the responsibilities of updating this model and managing versions.
Changing the version information so that it includes the numbers of the files used to generate the serialized workflow.
2020-03-05 13:25:28 -05:00
Dan Funk 70611e2c1d Adding the version of the specification used to create a workflow to the workflow api endpoint. Though the exact content of this version is likely to change.
Split the API specific models out from the workflow models to help me keep this straight.
Added tests to help me understand the errors thrown and the resolution path when a workflow specification changes in the midst of a running workflow.
2020-03-05 11:18:20 -05:00
Dan Funk 78b6f040eb Add the ability to forcibly restart a workflow, while retaining that workflows data.
A workflow specification knows its version number, which is generated by the version of the files that make it up.
A workflow specification version number is the primary file (the lead BPMN) followed by a consistently ordered version of each extra file associated with the workflow.  A change in any file modifies the specification's version.
2020-03-04 17:08:45 -05:00
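
One way such a composite version number could be assembled from per-file versions (the format and attribute names are assumptions, not necessarily what the code settles on):

    def spec_version(primary_file, other_files):
        """v<primary version> (<versions of the other files, in a stable order>)"""
        others = ".".join(str(f.latest_version)
                          for f in sorted(other_files, key=lambda f: f.id))
        return f"v{primary_file.latest_version} ({others})"
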
Dan Funk c5cee4761e Improve version handling of files. Consolidate more of this logic in FileService. Place the version on the actual data model, not the file model, so the file model remains the same, and we just version the data associated with it. 2020-03-04 13:40:25 -05:00
Dan Funk 7194d7d374 Standardizing the script tasks that can be executed on the server, adding tons of error messages for when things go wrong. All scripts must exist inside the crc/scripts directory.
Adding a new script that script tasks can use to add in data about the study.

Moving all the test workflow specifications out of the main load.

fixing a pile of tests so they can find workflow specs that are now moved into the test directory.
2020-03-03 13:52:45 -05:00
Aaron Louie 8611a23ad3 Renaming to snake case for consistency 2020-02-28 11:54:11 -05:00
Aaron Louie 0cc59d0974 Adds study inactive flag. Sets study to inactive if not found in Protocol Builder. 2020-02-27 11:17:58 -05:00
Aaron Louie 27d7afb656 Adds Protocol Builder models and schemas. Reorganizes and cleans up some files. 2020-02-27 09:54:46 -05:00
Aaron Louie 3ef4860391 Adds user_uid and investigator_ids fields to Study. Gets studies from protocol builder and adds them if they aren't already in the database 2020-02-26 18:06:51 -05:00
Dan Funk 1e8a095760 Fixing a rogue comma that made something a tuple and not a string, which drives me CRAZY. 2020-02-25 12:01:25 -05:00
Dan Funk 2cc6010c8d Protocol builder connections 2020-02-20 13:30:04 -05:00
Dan Funk 1a9b5b50e5 Merge branch 'master' of github.com:sartography/cr-connect-workflow 2020-02-18 16:39:11 -05:00
Dan Funk a642593e3d Adding support to handle Single Sign On (Shibboleth) authentication using Flask SSO and an attribute map that has worked in the past with UVA's implementation. Aside from the new user endpoint, nothing requires authentication, but soon everything will expect it. I'm setting up a backdoor we can use for development and staging that will cause a round-robin affair that should make this relatively painless. Dropped "RestException" as we had two ways or raising errors, and that was silly. 2020-02-18 16:38:56 -05:00
Aaron Louie b0b1a6e5e8 Saves form field key 2020-02-11 15:03:25 -05:00
Dan Funk 709bae76b2 Removing a rogue comma that was causing havoc. Also, don't fail if a mock already exists in the test database. 2020-02-11 11:11:21 -05:00
Dan Funk 9f0eb8477a Fix for a bug in the File service where it was being overly restrictive. 2020-02-10 16:27:57 -05:00
Dan Funk 1d24ebe382 Provide a script for generating word documents from template files. Refactored file management into a service to make it easier to programmatically add files. Modified the workflow_processor to inject the study_id and workflow_id into the running workflow so that this meta-information is available at the task level. 2020-02-10 16:19:23 -05:00