Dan Funk
de435bd961
The heck with camel case; what the heck, TypeScript? Get a grip. This is a Python API.
2020-05-15 16:38:37 -04:00
Dan Funk
53255ef35e
massive overhaul of the Workflow API endpoint.
...
No previous task, no last task, no task list. Just the current task and the navigation.
Use the token endpoint to set the current task, even if it is a "READY" task in the API.
The previous task can be set by identifying the prior task in the navigation (I'm hoping).
Preferring camel case to snake case on all new APIs. Maybe clean the rest up later.
2020-05-15 15:54:53 -04:00
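For the overhaul above, the reshaped workflow payload boils down to a current task plus the navigation list. A rough sketch of what such a payload might look like, using illustrative field names rather than the project's actual schema:

```python
# Hypothetical shape of the slimmed-down workflow payload; field names are
# illustrative, not the project's actual API schema.
workflow_payload = {
    "id": 42,
    "status": "user_input_required",
    "navigation": [
        {"id": 1, "name": "Task_GatherInfo", "state": "COMPLETED"},
        {"id": 2, "name": "Task_Review", "state": "READY"},
    ],
    # The single current task; a previous task would be derived from navigation.
    "next_task": {
        "id": "abc-123",
        "name": "Task_Review",
        "title": "Review the study",
        "state": "READY",
    },
}
```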
Dan Funk
b63ee8159e
We now return only the ready user tasks, not all tasks, and even then the ready user tasks come back without forms and details, just the bare minimum. This speeds things up considerably, and most of this information wasn't used anyway.
2020-05-14 17:13:47 -04:00
Dan Funk
55a1850e7c
adding a navigation component to the Workflow Model.
...
Running all extension properties through the Jinja template processor so you can have custom display names built from data, which is very helpful for building multi-instance displays.
Properties were returned as an array of key/value pairs, which is just mean. Switched this to a dictionary.
2020-05-14 13:43:23 -04:00
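The Jinja pass described above means a property value can reference task data. A minimal sketch of that idea, assuming a plain `jinja2.Template` and illustrative property and data names:

```python
from jinja2 import Template

# Illustrative extension properties, now a dictionary rather than an array
# of key/value pairs.
properties = {"display_name": "Review {{ document.name }}"}

# Illustrative task data available when the task is rendered.
task_data = {"document": {"name": "IRB Protocol.docx"}}

# Run every property value through the Jinja template processor.
rendered = {key: Template(value).render(**task_data)
            for key, value in properties.items()}
print(rendered["display_name"])  # -> "Review IRB Protocol.docx"
```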
Dan Funk
e723992fde
Found a number of bugs with parallel multi-instance; pulling in some recent changes from SpiffWorkflow to open things up a bit more and allow functional jumping between tasks.
2020-05-12 12:23:43 -04:00
Dan Funk
b7c11fd893
Merge branch 'master' into feature/investigators_reference_file
2020-05-11 17:36:37 -04:00
Dan Funk
02f8764056
Updated to use the latest script engine / evaluation engine, which creates a single location where all values used in BPMN/DMN are processed. Right now this is a Python-based interpreter, but we will eventually base this on FEEL expressions.
...
The validation process needs to take the API model into account so we catch errors with bad file names.
2020-05-11 17:04:05 -04:00
Dan Funk
da7cae51b8
Adding a new reference file that provides greater details about the investigators related to a study.
...
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less likely to break, thanks to better mocks.
Allow all tests to pass even when the Protocol Builder mock isn't running locally.
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
2020-05-07 13:57:24 -04:00
Dan Funk
1571986c0e
I had to give up and live with the idea that we can only render documentation on the current task, not on the previous or next tasks. I think this is ok. If you want to view a task, you need to make it the active task to assure all the parts and pieces are in place.
2020-05-06 13:01:38 -04:00
Dan Funk
8ded625c7d
Merge remote-tracking branch 'origin/chore/update_specs' into feature/previous_task
...
# Conflicts:
# Pipfile.lock
Assuring that all documents from the xls spreadsheet are loaded when doing validations.
Fixing some failed tests.
2020-05-06 11:46:19 -04:00
Dan Funk
07e58e923d
Merge remote-tracking branch 'origin/chore/update_specs' into feature/previous_task
...
# Conflicts:
# Pipfile.lock
Assuring that all documents from the xls spreadsheet are loaded when doing validations.
2020-05-06 11:25:50 -04:00
Dan Funk
9629b36e92
Setting JSON_SORT_KEYS to False, assuring that Flask does not re-sort all data returned to the front end.
...
Updating SpiffWorkflow, which has some critical behavioral changes around MultiInstance.
2020-05-06 10:59:49 -04:00
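JSON_SORT_KEYS is a standard Flask configuration flag, so the change above is essentially a one-liner; a minimal sketch:

```python
from flask import Flask, jsonify

app = Flask(__name__)
# Keep dictionaries in the order the API produced them instead of letting
# Flask sort the keys alphabetically before serialization.
app.config["JSON_SORT_KEYS"] = False

@app.route("/ordered")
def ordered():
    # With JSON_SORT_KEYS set to False, "navigation" stays before "last_task".
    return jsonify({"navigation": [], "last_task": None})
```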
Dan Funk
714b5f3be0
Merge branch 'feature/protocol_status' into feature/previous_task
...
# Conflicts:
# crc/services/study_service.py
2020-05-04 11:08:36 -04:00
Dan Funk
2699f5c65c
Refactor the stats models, and assure they are correct across all tests that exercise the workflow API.
...
I noticed we were saving the workflow every time we loaded it up, rather than only when we were making changes to it. Refactored this to be a little more careful.
Centralized the saving of the workflow into one location in the processor, so we can make sure we update all the details about that workflow every time we save.
The workflow service has a method that will log any task action taken in a consistent way.
The stats models were removed from the API completely. Will wait for a use case for dealing with this later.
2020-05-04 10:57:09 -04:00
Dan Funk
1f5002680a
Initial work on a "Previous" task.
2020-05-01 12:11:39 -04:00
Dan Funk
bec59a71d7
Deleting stuff is a damn mess, but this is a little cleaner.
2020-04-29 16:07:39 -04:00
Dan Funk
f1f8b91c9c
Refactor the document details scripts. Now there is one script; it returns data in a consistent format and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the Protocol Builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0 and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed the devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
...
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shortening the SponsorsList.xls file makes for slightly faster tests. It seems senseless to load 5000 every time we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
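A sketch of the dictionary shape the commit above describes, with one illustrative entry; the real codes and fields come from the irb_documents lookup table:

```python
# Illustrative entry in the dictionary returned by StudyInfo's "documents"
# argument, keyed on the document code.
documents = {
    "IRB_INFOSEC_DOC": {
        "required": True,   # required according to the Protocol Builder
        "count": 1,         # files attached so far; a workflow may push this above 0
        "files": [          # additional details for each attached file
            {"file_id": 123, "name": "infosec_plan.docx"},
        ],
    },
}
```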
Aaron Louie
beb86f0453
Adds protocol script to study service
2020-04-29 10:21:24 -04:00
Dan Funk
3e3a249e3c
Verifying Sub-Process works, and adding a field to expose a hint as to the sub-process in which the task occurs.
...
Because the name field is now used to expose workflow/sub-process information on tasks, we can't use it to store the workflow_version, so that is now just stored on the database model, which is much cleaner and removes a duplication.
2020-04-28 13:48:44 -04:00
Dan Funk
447f4013f8
Assure that a hard-reset sticks, and the system is properly updated.
2020-04-27 16:08:23 -04:00
Dan Funk
1b9743a4d1
Assure that if a form has an enumeration, it errors out when that enumeration is empty.
2020-04-27 15:10:09 -04:00
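A minimal sketch of the kind of check the commit above describes, using a generic exception and an illustrative field structure rather than the project's actual error class:

```python
def validate_enum_field(field):
    """Illustrative check: an enum field must offer at least one option."""
    if field.get("type") == "enum" and not field.get("options"):
        raise ValueError(
            f"Field '{field.get('id')}' is an enumeration but has no options.")

try:
    validate_enum_field({"id": "sponsor", "type": "enum", "options": []})
except ValueError as err:
    print(err)  # -> Field 'sponsor' is an enumeration but has no options.
```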
Aaron Louie
8ed520c6f1
Removes hidden workflows
2020-04-24 09:45:55 -04:00
Aaron Louie
c85173de88
Sorts approvals by display order
2020-04-24 08:54:14 -04:00
Dan Funk
1ccedbc9fd
Merge branch 'master' of github.com:sartography/cr-connect-workflow
2020-04-24 07:01:40 -04:00
Dan Funk
12eb039bc9
Server isn't erroring out, but can't find the lookup table id in the database, so trying to use the in-memory model instead, to give things time to get to the database. Really unsure what is happening here. Hard to see in the database.
2020-04-24 07:01:32 -04:00
Aaron Louie
af1bb9f80d
Adds more useful metadata to approvals and documents status scripts. Fleshes out and pretties up Documents & Approvals screen
2020-04-23 23:32:20 -04:00
Aaron Louie
47de010a88
Puts data from sequential calls to StudyInfo into the right place. Sets the required document flag correctly.
2020-04-23 21:02:08 -04:00
Aaron Louie
d91f690388
Adds documents_status StudyInfo script. Adds Documents & Approvals workflow spec.
2020-04-23 19:25:01 -04:00
Dan Funk
08140eca17
Merge branch 'master' of github.com:sartography/cr-connect-workflow
2020-04-23 15:01:02 -04:00
Dan Funk
3aeb7ad116
Server isn't erroring out, but can't find the lookup table id in the database, so trying to use the in-memory model instead, to give things time to get to the database. Really unsure what is happening here. Hard to see in the database.
2020-04-23 14:58:17 -04:00
Aaron Louie
796c109611
Adds approvals to study service
2020-04-23 14:40:05 -04:00
Dan Funk
b5b46b7c2c
Better overall search results for type-ahead. Still dealing with stop words failing.
2020-04-23 12:05:08 -04:00
Dan Funk
65b29e1a9d
Don't just bomb out as soon as someone types an empty string.
2020-04-23 09:44:11 -04:00
Dan Funk
7b085c9c9d
Adding an API Endpoint that will return a list of LookupValues that match a given query - can be used to populate an auto-complete table.
2020-04-22 19:40:40 -04:00
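A rough sketch of such an endpoint, assuming a hypothetical LookupDataModel and a simple `ilike` match in place of the real full-text search; model, columns, and route are illustrative:

```python
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///:memory:"
db = SQLAlchemy(app)

class LookupDataModel(db.Model):  # hypothetical stand-in for the real model
    id = db.Column(db.Integer, primary_key=True)
    value = db.Column(db.String)
    label = db.Column(db.String)

with app.app_context():
    db.create_all()

@app.route("/lookup")
def lookup():
    """Return lookup values matching the query, for an auto-complete field."""
    query = request.args.get("query", "")
    limit = int(request.args.get("limit", 10))
    matches = (LookupDataModel.query
               .filter(LookupDataModel.label.ilike(f"%{query}%"))
               .limit(limit)
               .all())
    return jsonify([{"value": m.value, "label": m.label} for m in matches])
```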
Dan Funk
6de8c8b977
Create lookup tables for XLS files referenced in workflows so we can do full-text searches and populate lists on the fly quickly.
2020-04-22 15:37:02 -04:00
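A sketch of turning one of those spreadsheets into lookup-table rows, assuming pandas and made-up column names (only the SponsorsList.xls file name appears elsewhere in this log):

```python
import pandas as pd

# Illustrative: read the spreadsheet referenced by the workflow and turn each
# row into a value/label pair for the lookup table. Column names are made up.
frame = pd.read_excel("SponsorsList.xls").fillna("")
rows = [
    {"value": row["CUSTOMER_NUMBER"], "label": row["CUSTOMER_NAME"]}
    for _, row in frame.iterrows()
]
# rows would then be bulk-inserted into the lookup table used for searches.
```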
Dan Funk
fd0adb1d43
Updated the study status to use a different enumeration. Migration correctly handles modifying the enum.
...
INCOMPLETE = 'Incomplete in Protocol Builder',
ACTIVE = 'Active / Ready to roll',
HOLD = 'On Hold',
OPEN = 'Open - this study is in progress',
ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
2020-04-21 17:13:30 -04:00
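The values listed above map directly onto a Python enum; a sketch of what the updated status enumeration might look like (the class name is an assumption, the values are from the commit message):

```python
import enum

class StudyStatus(enum.Enum):  # class name is illustrative
    INCOMPLETE = 'Incomplete in Protocol Builder'
    ACTIVE = 'Active / Ready to roll'
    HOLD = 'On Hold'
    OPEN = 'Open - this study is in progress'
    ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
```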
Dan Funk
0a74bf8c44
We can now collect and provide "extension properties" on a task, as set in the Camunda Modeler.
...
These are provided as "properties" on a task, and are identical in structure to properties on a form field.
2020-04-21 12:07:59 -04:00
Dan Funk
ec112f52be
Make use of cleaner data provided by SpiffWorkflow about multi-instance settings.
2020-04-21 11:43:43 -04:00
Dan Funk
ee999a0f15
Fixing a bunch of stupid mistakes because I am tired.
2020-04-20 20:28:12 -04:00
Dan Funk
edbd75bb75
Connect LDAP Requests to the StudyInfo service so we get back additional details.
2020-04-20 16:02:13 -04:00
Dan Funk
2d3402a719
LDAP service with test and mocks.
...
LDAP_URL can be set in an environment variable.
2020-04-20 15:16:33 -04:00
Dan Funk
d3dd9dcc25
Functional multi-instance. Works with no changes to the front end, though I've added some attributes to the task so we can give people a sense of how many iterations they will go through.
2020-04-19 15:14:10 -04:00
Dan Funk
241980f98f
If you add a file to a workflow that has the exact same name as a Task Spec's ID and an extension of "md", it will use that file as the markdown content and ignore the markdown in the documentation on the task spec.
...
Moving the primary process id from the workflow model to the file model, and assuring it is updated properly. This was causing a bug that would "lose" the workflow.
2020-04-17 13:30:32 -04:00
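A sketch of the file-over-inline rule described above, with a hypothetical helper rather than the project's actual lookup:

```python
def documentation_for(task_spec_id, inline_docs, workflow_files):
    """Illustrative: prefer a '<task_spec_id>.md' file among the workflow's
    files; otherwise fall back to the documentation on the task spec."""
    wanted = f"{task_spec_id}.md"
    for file in workflow_files:
        if file["name"] == wanted:
            return file["content"]
    return inline_docs

docs = documentation_for(
    "Task_Review",
    inline_docs="*inline markdown*",
    workflow_files=[{"name": "Task_Review.md", "content": "# From the file"}])
print(docs)  # -> "# From the file"
```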
Dan Funk
dc2895cb05
Allow configurators to upload XLS files into a workflow for defining enumerations of values for dropdown lists in forms. Fixing lots of tests.
...
Found a problem where the documentation for elements was being processed BEFORE data was loaded from a script. There still may be some issues here.
Ran into an issue with circular dependencies - handling it with a new workflow_service, and pulling computational logic out of the api_models - it was the right thing to do.
2020-04-15 11:13:32 -04:00
Dan Funk
c79415a794
Throw a sensible error when a study is not found on get_study (don't 500).
...
Some ugly fixes in the file_service to improve pandas output from spreadsheet processing; I need to revisit these.
Now that SpiffWorkflow handles multi-instance, we can't have random multi-instance tasks around.
Improved tests around study deletion.
2020-04-08 13:28:43 -04:00
Aaron Louie
519a034d87
Updates last_updated when file data is saved. Returns last_updated as lastModified in response header for file data endpoint.
2020-04-08 12:58:55 -04:00
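A minimal sketch of exposing the timestamp in a response header as the commit above describes; the route and the header casing are taken from the message and should be treated as illustrative:

```python
from datetime import datetime, timezone
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/file/<int:file_id>/data")  # illustrative route
def file_data(file_id):
    last_updated = datetime.now(timezone.utc)   # stand-in for the stored value
    response = make_response(b"...file bytes...")
    response.headers["lastModified"] = last_updated.isoformat()
    return response
```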
Dan Funk
c6b6ee5d70
Renamed the required_docs script to just "documents"; it returns all documents in the irb_documents lookup table, indexed on the "Code", so details become available in the task data like "documents.IRB_INFOSEC_DOC.required".
...
Updated the irb_documents with shorter code names, thanks to Alex. Re-worked the DMN models so they can properly read from this new data structure.
2020-04-06 16:56:00 -04:00
Dan Funk
e283b86466
Fixing a bug with deleting a study.
2020-04-06 13:08:17 -04:00
Dan Funk
a322801c91
Allow a study to be deleted, even if some statistics are lying around.
2020-04-03 16:41:16 -04:00
Dan Funk
60a10bb688
Marshmallow isn't the right tool when dealing with large models with lots of null values. Rather than fight the process of managing the Study Details, I'm letting that fall through, and we can test on an individual value or maybe set up a constants array when that becomes meaningful.
2020-04-03 16:24:38 -04:00