Commit Graph

8 Commits

Author SHA1 Message Date
Kelly McDonald c4a15be90c Change DMN files to the Python standard;
NB: this mostly means false=>False and true=>True.

We may have to decide whether we want to add false and true as extensions in the Python namespace.
2020-08-12 10:48:59 -04:00
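
A minimal sketch of why the literal change matters, assuming DMN expressions are evaluated as Python; the variable names and contexts below are hypothetical:

    # DMN expressions evaluated as Python only understand the capitalized literals,
    # so lowercase false/true fail unless they are added to the evaluation namespace.
    expression = "approved == True"        # Python-standard literal, as in the updated DMN files
    context = {"approved": True}
    print(eval(expression, {}, context))   # True

    # The alternative mentioned above: extend the namespace so the legacy
    # lowercase literals would still evaluate.
    legacy_context = dict(context, true=True, false=False)
    print(eval("approved == true", {}, legacy_context))   # also True
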
Kelly McDonald a124e13c6a Replace all legacy-style calls with new calls.
Still having issues where we try to eval an empty definition; not quite sure why there is a difference from what we had before. I may need to revert some of it and determine what is going on.
2020-07-24 14:33:24 -04:00
Kelly McDonald de54b63e20 Process scripts with no shebang (#!) as regular Python scripts. If there is a shebang, we look up the class as we did before.
I've also made it fall back if we accidentally forget to add a shebang to a study, since that would otherwise be a breaking change.

With the fallback feature, it should work with unmodified BPMN documents.
2020-07-17 10:56:04 -04:00
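
A rough sketch of the shebang dispatch and fallback described in this commit; the function names and the exception-based fallback are assumptions for illustration, not the project's actual API:

    def run_script_task(script_text, task_data):
        """Run a BPMN script task as a named script class or as plain Python."""
        stripped = script_text.lstrip()
        if stripped.startswith("#!"):
            # Shebang present: look up the script class by name, as before.
            class_name = stripped.splitlines()[0][2:].strip()
            return lookup_script_class(class_name).do_task(task_data)
        try:
            # No shebang: execute the text as an ordinary Python script
            # against the task data (the new default behavior).
            exec(script_text, {}, task_data)
            return task_data
        except (SyntaxError, NameError):
            # Fallback: a script that was really a class reference but is
            # missing its shebang still resolves to the old class lookup.
            return lookup_script_class(stripped.split()[0]).do_task(task_data)

    def lookup_script_class(name):
        # Placeholder for the existing class lookup the commit refers to.
        raise NotImplementedError(f"script class lookup for {name!r}")
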
Dan Funk f1f8b91c9c Refactor the document details scripts. Now there is one script; it returns data in a consistent format and has all the details required. The script is located in StudyInfo, with the argument "documents". Note that it returns a dictionary of ALL the documents, with a field marking which ones are required according to the protocol builder. Others may become required if a workflow determines so, in which case the workflow will enforce this, the document will have a count > 0, and additional details will appear in a list of files within the document. I modified the XLS file to use lowercase variable names, because it disturbed me and we have to reference them frequently. Removed the devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static BPMN files to use the new format.
Shortening the SponsorsList.xls file makes for slightly faster tests; it seems senseless to load 5000 every time we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
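
A hedged illustration of the documents dictionary this commit describes; the field names ("required", "count", "files") are inferred from the message and are assumptions rather than verified code:

    # StudyInfo with the "documents" argument returns a dictionary of ALL documents,
    # keyed by code, with a flag for the ones the protocol builder requires.
    documents = {
        "IRB_INFOSEC_DOC": {
            "required": True,   # required according to the protocol builder
            "count": 0,         # number of files attached so far
            "files": [],        # details for each uploaded file
        },
    }

    # A downstream script task or gateway could then check, for example:
    doc = documents["IRB_INFOSEC_DOC"]
    if doc["required"] and doc["count"] == 0:
        print("IRB InfoSec document is still required but not yet provided.")
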
Dan Funk ee999a0f15 Fixing a bunch of stupid mistakes because I am tired. 2020-04-20 20:28:12 -04:00
Dan Funk 697127660f Assure that all script tasks place data in a dictionary that is named exactly the same as the class - which is also the same as the Script tag. 2020-04-07 14:09:21 -04:00
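
A small sketch of the convention this commit enforces, assuming a simple script-class shape; the class and field names are illustrative only:

    class StudyInfo:
        """Example script class; its name doubles as the BPMN script tag."""
        def do_task(self, task_data, *args):
            # Results always land under a key that exactly matches the class
            # name / script tag, so BPMN expressions know where to look.
            task_data[self.__class__.__name__] = {"documents": {}}
            return task_data
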
Dan Funk c6b6ee5d70 Renamed the required_docs script to just "documents", and it returns all documents in the irb_documents lookup table indexed on the "Code" - so details become available in the task data like "documents.IRB_INFOSEC_DOC.required".
Updated the irb_documents with shorter code names, thanks to Alex. Re-worked the DMN models so they can properly read from this new data structure.
2020-04-06 16:56:00 -04:00
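
The dot-notation access mentioned above implies task data that supports attribute-style lookups; a minimal sketch under that assumption (the DotDict helper is hypothetical, not necessarily what the project uses):

    class DotDict(dict):
        """Dictionary whose keys can also be read as attributes."""
        __getattr__ = dict.__getitem__

    documents = DotDict(IRB_INFOSEC_DOC=DotDict(required=True))
    print(documents.IRB_INFOSEC_DOC.required)   # True, as referenced from the DMN models
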
Dan Funk e2c408b70d Removed all self-referential calls in the study_api. One API endpoint should never call another API endpoint. Moved the logic for updating a study to the study Model, rather than checking and setting dictionary values, which would become very hard to maintain.
The protocol builder service now returns real models, not dictionaries, forcing proper validation and fail-fast behavior.
Changed the name of the "status" spec to "top_level_workflow", removing any connection a workflow or study has with this specification. It is only used to determine status in real time, and is not reused or tracked.
Modified the required documents script to return a dictionary and not an array, making it easier to refer to specific values in the BPMN and DMN.
Working on new ways to test the top_level_workflow in the context of updates; this is still a work in progress.
Making use of several modifications to the Spiff library that enable more complex expressions in DMN models. This is evident in the new DMN models for the top_level_workflow.
2020-03-26 12:51:53 -04:00
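
A hedged sketch of moving the update logic onto the study model, as this commit describes; the class, method, and field names are assumptions for illustration, not the project's actual models:

    class Study:
        def __init__(self, id, title="", protocol_builder_status=None):
            self.id = id
            self.title = title
            self.protocol_builder_status = protocol_builder_status

        def update_from_study_details(self, details):
            # The model applies its own updates, instead of callers checking and
            # setting dictionary values in the API layer. Because "details" is a
            # real model rather than a dict, a missing attribute raises at once
            # (fail fast) instead of silently defaulting.
            self.title = details.title
            self.protocol_builder_status = details.status
            return self
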