Commit Graph

266 Commits

Author SHA1 Message Date
Kelly McDonald 479f6d9647 STG-26
Do rename per conversation, continue to look for ways to implement looping in a way that is re-entrant
2020-06-18 12:01:02 -04:00
Carlos Lopez 4db815a999 Handling incoming values from processor 2020-06-17 21:11:47 -06:00
Carlos Lopez e947f40ec7 Merge branch 'rrt/dev' into feature/emails-enhancement 2020-06-17 20:10:11 -06:00
Carlos Lopez 896ba6b377 Email now relies on markdown content 2020-06-17 17:00:16 -06:00
Carlos Lopez 2ce2dc73b5 Rendering proper content & organizing file structure for email tests 2020-06-17 16:09:38 -06:00
Kelly McDonald 1844c93919 STG-26 - basic test case for a looping task
Criteria:
task.multi_instance_type == 'looping'

To terminate, use the standard endpoint for submitting form data with a query variable of terminate_loop=true.

Will likely need two buttons:
"Submit and quit"
"Submit and add another"
or something similar
2020-06-17 11:35:06 -04:00
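
For illustration, a minimal sketch of how a client might terminate such a looping task, assuming the requests library and a placeholder task-data endpoint; only the terminate_loop=true query variable comes from the commit above.

    import requests

    # Placeholder host and path; only the terminate_loop=true query variable
    # is described in the commit above.
    BASE = "http://localhost:5000/v1.0"
    workflow_id, task_id = 42, "abc123"

    response = requests.put(
        f"{BASE}/workflow/{workflow_id}/task/{task_id}/data",
        params={"terminate_loop": "true"},   # stop the looping multi-instance task
        json={"answer": "final value"},      # the usual form data submission
    )
    response.raise_for_status()
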
Carlos Lopez d4a285883f Email script 2020-06-16 18:42:36 -06:00
Kelly McDonald 0d3105fe7e Make changes to the workflow names so that they are placed in the correct order - workflows now go in order of name rather than in the order they appear in the XML, to allow more control over the way the nav list is displayed. 2020-06-15 12:32:19 -04:00
Dan Funk 1f0e8741ba Run the validation twice, once completing all of the data, and a second time, completing only the required fields.
Also, add a helper method to reduce boilerplate code around Workflow Exceptions.
2020-05-30 17:26:27 -04:00
Dan Funk 860c475b29 Fill out repeating sections during validation process.
Also, when returning error messages, attempt to include the task data for the task that caused the error.
Also, when attempting to delete any file, respond with an API error explaining the issue, and log the details.
2020-05-30 15:37:04 -04:00
Dan Funk 11413838a7 Faster lookup fields. We were parsing the spec each time to get details about how to search. We're just grabbing the workflow id and task id now and building that straight into the full text search index for faster lookups. Should be peppy.
Another speed improvement - data in the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need to use this model frequently.
2020-05-29 01:39:39 -04:00
Dan Funk cd7f67ab48 A major refactor of how we search and store files, as there were a lot of confusing bits in here.
From an API point of view you can do the following (and only the following)

/files?workflow_spec_id=x
* You can find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id
/files?workflow_id=x
* You can find all files associated with a workflow_id, and add a file that is directly associated with the workflow
/files?workflow_id=x&form_field_key=y
* You can find all files associated with a form element on a running workflow, and add a new file.
   Note: you can add multiple files to the same form_field_key, IF they have different file names. If the name is the same, the original file is archived,
   and the new file takes its place.

The study endpoints always return a list of the file metadata associated with the study. Removed /studies-files, but there is an endpoint called /studies/all that returns all the studies in the system, and does include their files.

On a deeper level:
 The File model no longer contains:
  - study_id,
  - task_id,
  - form_field_key

Instead, if the file is associated with a workflow, then that is the one way it is connected to the study, and we use this relationship to find files for a study.
A file is never associated with a task_id, as these change when the workflow is reloaded.
The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
2020-05-28 08:27:26 -04:00
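
For illustration, a hedged sketch of the three query patterns listed above using the requests library; the host is a placeholder and the "x" / "y" values are the commit's own placeholders, only the /files path and query parameters come from the commit.

    import requests

    BASE = "http://localhost:5000/v1.0"   # placeholder host

    # All files associated with a workflow specification.
    spec_files = requests.get(f"{BASE}/files",
                              params={"workflow_spec_id": "x"}).json()

    # All files associated with a running workflow.
    workflow_files = requests.get(f"{BASE}/files",
                                  params={"workflow_id": "x"}).json()

    # Files tied to a form field on a running workflow; uploading a file with
    # an existing name archives the original and replaces it.
    field_files = requests.get(
        f"{BASE}/files",
        params={"workflow_id": "x", "form_field_key": "y"},
    ).json()
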
Dan Funk 77f72e408f Lookup Service now raises exact matches to the top. Very hackish, but it works. 2020-05-27 14:36:10 -04:00
Dan Funk d5e075db82 Order search results by relevancy in the lookup service. 2020-05-27 09:47:44 -04:00
Dan Funk d1606ffb1a forgot to include the new empty master workflow, which allows the tests to all pass. 2020-05-22 15:31:38 -04:00
Dan Funk 951710d762 ldap lookup.
Refactored calls into a new lookup_service to keep things tidy.

New keys for all enum/auto-complete fields:
    PROP_OPTIONS_FILE = "spreadsheet.name"
    PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
    PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
    PROP_LDAP_LOOKUP = "ldap.lookup"
    FIELD_TYPE_AUTO_COMPLETE = "autocomplete"
2020-05-19 16:11:43 -04:00
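
For illustration, a hypothetical dictionary showing how a configurator might fill in the extension properties behind these keys; the key names come from the commit, the values are invented.

    # Hypothetical properties for an "autocomplete" form field; the key names
    # mirror the constants above, the values are examples only.
    field_type = "autocomplete"                      # FIELD_TYPE_AUTO_COMPLETE
    spreadsheet_field_properties = {
        "spreadsheet.name": "sponsors.xlsx",         # PROP_OPTIONS_FILE
        "spreadsheet.value.column": "SPONSOR_ID",    # PROP_OPTIONS_VALUE_COLUMN
        "spreadsheet.label.column": "SPONSOR_NAME",  # PROP_OPTIONS_LABEL_COL
    }
    ldap_field_properties = {
        "ldap.lookup": "true",                       # PROP_LDAP_LOOKUP
    }
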
Dan Funk 7a380fdeb4 Forgot to fix another test and to add the example file used for a previous test. 2020-05-16 15:38:15 -04:00
Dan Funk 53255ef35e massive overhaul of the Workflow API endpoint.
No Previous Task, No Last Task, No Task List.  Just the current task, and the Navigation.
Use the token endpoint to set the current task, even if it is a "READY" task in the api.
Previous Task can be set by identifying the prior task in the Navigation (I'm hoping)
Preferring camel case to snake case on all new APIs.  Maybe clean the rest up later.
2020-05-15 15:54:53 -04:00
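
For illustration, a hypothetical sketch of the slimmed-down payload this describes: one current task plus a navigation list, with camelCase keys; all field names and values here are invented, not taken from the API.

    # Invented example of the workflow payload described above: just the
    # current task and the navigation, no previous/last task and no task list.
    workflow = {
        "id": 42,
        "status": "user_input_required",
        "task": {                                    # the single current task
            "id": "abc123",
            "name": "Task_AskQuestion",
            "title": "Ask Question",
        },
        "navigation": [                              # replaces the old task list
            {"id": 1, "name": "Task_AskQuestion", "state": "READY"},
            {"id": 2, "name": "Task_ShowAnswer", "state": "FUTURE"},
        ],
    }
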
Dan Funk 6d4348d644 Fixing some failing tests. Moved the task properties into a dictionary, but moving the form field properties to a dictionary will be a larger effort that we don't want to get into on either the back or front end right this moment. 2020-05-14 14:39:14 -04:00
Dan Funk 55a1850e7c adding a navigation component to the Workflow Model.
running all extension/properties through the Jinja template processor so you can have custom display names using data, very helpful for building multi-instance displays.
Properties was returned as an array of key/value pairs, which is just mean.  Switched this to a dictionary.
2020-05-14 13:43:23 -04:00
Dan Funk da7cae51b8 Adding a new reference file that provides greater details about the investigators related to a study.
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less subject to breaking through better mocks.
Allow all tests to pass even when the protocol builder mock isn't running locally.
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
2020-05-07 13:57:24 -04:00
Dan Funk 1571986c0e I had to give up and live with the idea that we can only render documentation on the current task, not on the previous or next tasks. I think this is ok. If you want to view a task, you need to make it the active task to assure all the parts and pieces are in place. 2020-05-06 13:01:38 -04:00
Dan Funk 9629b36e92 Setting JSON_SORT_KEYS to false, assuring that Flask does not resort all data returned to the front end.
Updating Spiff Workflow which has some critical behavioral changes around MultiInstance.
2020-05-06 10:59:49 -04:00
Dan Funk f1f8b91c9c Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shortening the SponsorsList.xls file makes for slightly faster tests. Seems senseless to load 5000 every time we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
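
For illustration, a hypothetical example of the dictionary the StudyInfo "documents" argument is described as returning, with a flag for protocol-builder-required documents and a list of attached files; the codes, counts, and file entries are invented.

    # Invented shape of the "documents" dictionary described above; every
    # document appears, whether or not the protocol builder requires it.
    documents = {
        "IRB_INFOSEC_DOC": {
            "required": True,        # required per the protocol builder
            "count": 1,              # files uploaded against this code so far
            "files": [{"file_id": 123, "name": "infosec_plan.docx"}],
        },
        "UVA_COMPLIANCE_PLAN": {     # invented code, for contrast
            "required": False,
            "count": 0,
            "files": [],
        },
    }

    # A downstream gateway or script could then test, for example,
    # documents["IRB_INFOSEC_DOC"]["required"].
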
Dan Funk 0088364b1d Merge branch 'master' of github.com:sartography/cr-connect-workflow 2020-04-28 13:48:52 -04:00
Dan Funk 3e3a249e3c Verifying Sub-Process works, and adding a field to expose a hint as to the sub-process in which the task occurs.
Because the name field is now used to expose workflow/sub-process information on tasks, we can't use it to store the workflow_version, so that is now just stored on the database model.  Which is much cleaner and removes a duplication.
2020-04-28 13:48:44 -04:00
Aaron Louie 496e5b7719 Updates all workflow specs to match staging 2020-04-27 22:54:05 -04:00
Dan Funk 12eb039bc9 Server isn't erroring out, but can't find the lookup table id in the database, so trying to use the in-memory model instead, to give things time to get to the database. Really unsure what is happening here. Hard to see in the database. 2020-04-24 07:01:32 -04:00
Dan Funk 286af86f08 Forgot a missing bpmn file for running the tests. 2020-04-22 19:41:40 -04:00
Dan Funk 7b085c9c9d Adding an API Endpoint that will return a list of LookupValues that match a given query - can be used to populate an auto-complete table. 2020-04-22 19:40:40 -04:00
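
For illustration, a hedged sketch of querying such an endpoint to drive an auto-complete widget; the path and parameter names are placeholders, the commit only states that LookupValues matching a query are returned.

    import requests

    # Placeholder path and parameters; only the idea of "query in, matching
    # LookupValues out" comes from the commit above.
    matches = requests.get(
        "http://localhost:5000/v1.0/workflow/42/lookup/sponsor",
        params={"query": "univ", "limit": 10},
    ).json()
    for match in matches:
        print(match)   # e.g. {"value": "UVA", "label": "University of Virginia"}
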
Dan Funk fd0adb1d43 Updated the study status to use a different enumeration. Migration correctly handles modifying the enum.
INCOMPLETE = 'Incomplete in Protocol Builder',
  ACTIVE = 'Active / Ready to roll',
  HOLD = 'On Hold',
  OPEN = 'Open - this study is in progress',
  ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
2020-04-21 17:13:30 -04:00
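
For illustration, the enumeration above rendered as a plain Python enum, assuming a standard enum.Enum; the class name is hypothetical and the real model and migration code are not shown in the commit.

    import enum

    class StudyStatus(enum.Enum):   # hypothetical class name
        INCOMPLETE = 'Incomplete in Protocol Builder'
        ACTIVE = 'Active / Ready to roll'
        HOLD = 'On Hold'
        OPEN = 'Open - this study is in progress'
        ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
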
Dan Funk 0a74bf8c44 We can now collect and provide "extension properties" on a task as set in the camunda modeler.
These are provided as "properties" on a task, and are identical in structure to properties on a form field.
2020-04-21 12:07:59 -04:00
Dan Funk ee999a0f15 fixing a bunch of stupid mistakes because I am tired. 2020-04-20 20:28:12 -04:00
Dan Funk 59d400b058 Assure DMN can pick up name rather than label outputs. 2020-04-20 19:26:42 -04:00
Dan Funk 2d3402a719 Ldap Service with Test and mocks.
LDAP_URL can be set in an environment variable.
2020-04-20 15:16:33 -04:00
Dan Funk 815f40a539 forgot a critical file. 2020-04-17 15:18:03 -04:00
Dan Funk 241980f98f If you add a file to a workflow that has the exact same name as a Task Spec's ID, and an extension of "md", it will use that file as the markdown content, and ignore the markdown in the documentation on the task spec.
Moving the primary process id from the workflow model to the file model, and assuring it is updated properly.  This was causing a bug that would "lose" the workflow.
2020-04-17 13:30:32 -04:00
Dan Funk 3d820fcb21 Initial work on manual instance. 2020-04-16 11:07:29 -04:00
Dan Funk dc2895cb05 Allow configurators to upload xls files into a workflow for defining enumerations of values for dropdown lists in forms. Fixing lots of tests.
Found a problem where the documentation for elements was being processed BEFORE data was loaded from a script.  There still may be some issues here.

Ran into an issue with circular dependencies - handling it with a new workflow_service, and pulling computational logic out of the api_models - it was the right thing to do.
2020-04-15 11:13:32 -04:00
Dan Funk c79415a794 throw a sensible error when study is not found on get_study (don't 500)
some ugly fixes in the file_service for improving pandas output from spreadsheet processing that I need to revisit.
now that the spiff-workflow handles multi-instance, we can't have random multi-instance tasks around.
Improved tests around study deletion.
2020-04-08 13:28:43 -04:00
Dan Funk 697127660f Assure that all script tasks place data in a dictionary that is named exactly the same as the class - which is also the same as the Script tag. 2020-04-07 14:09:21 -04:00
Dan Funk c6b6ee5d70 Renamed the required_docs script to just "documents", and it returns all documents in the irb_documents lookup table indexed on the "Code" - so details become available in the task data like "documents.IRB_INFOSEC_DOC.required".
Updated the irb_documents with shorter code names, thanks to Alex. Re-worked the DMN models so they can properly read from this new data structure.
2020-04-06 16:56:00 -04:00
Dan Funk 60a10bb688 Marshmallow isn't the right tool when dealing with large models with lots of null values. Rather than fight the process of managing the Study Details, I'm letting that fall through, and we can test on an individual value or maybe set up a constants array when that becomes meaningful. 2020-04-03 16:24:38 -04:00
Dan Funk c7d2c28178 Vastly more informative ApiError model that provides details on the underlying task where the error occurred.
Added a validate_workflow_specification endpoint that allows you to check if the workflow will execute from beginning to end using random data.
Minor fixes to existing bpmns to allow them to pass.
All scripts must include a "do_task_validate_only" that restricts external calls and database modifications, but performs as much logic as possible.
2020-03-27 08:29:31 -04:00
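
For illustration, a hypothetical skeleton of a script task handler: the do_task_validate_only requirement and the rule that results land in a dictionary named after the class come from the commits above, while the class name, method signatures, and values are invented.

    # Hypothetical script task handler; signatures and values are invented.
    class ExampleScript:

        def do_task(self, task_data, study_id, *args):
            # Full run: may call external services or touch the database.
            task_data["example_script"] = {"value": 42}   # keyed on the class name
            return task_data

        def do_task_validate_only(self, task_data, study_id, *args):
            # Validation run: same shape of output, but no external calls and
            # no database modifications.
            task_data["example_script"] = {"value": 0}
            return task_data
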
Dan Funk e2c408b70d Removed all self-referential calls in the study_api. One api endpoint should never call another api endpoint. Moved the logic for updating a study to the study Model, rather than checking and setting dictionary values which will become very hard to maintain.
The protocol builder service now returns real models, not dictionaries, forcing proper validation and fail-fast behavior.
Changed the name of the "status" spec to "top_level_workflow" and removed any connection a workflow or study has with this specification.  It is only used to determine status in real time, and is not reused or tracked.
Modified the required documents script to return a dictionary and not an array, making it easier to speak to specific values in the BPMN and DMN.
Working on new ways to test the top_level_workflow in the context of updates, this is still a work in progress.
Making use of several modifications to the Spiff library that enables more complex expressions in DMN models. This is evident in the new DMN models for the top_level_workflow
2020-03-26 12:51:53 -04:00
Dan Funk 16c6ba9661 Removing unneeded files. 2020-03-20 08:33:37 -04:00
Dan Funk 6e3b6c2635 Assure that files uploaded through web forms and files generated from templates can be cross-referenced to known document requirements from the protocol builder. Configurators can control this by managing an XLS spreadsheet called "irb_documents.xlsx".
Required Documents is becoming complicated, so making this its own script task, removing it from study_info.py
The file_service is now very aware of this irb_documents file, so it will always need to exist.  We seed this file
during setup, but it can be overwritten by the configurator.
2020-03-19 17:14:20 -04:00
Dan Funk 560b8a8782 Merges details from the irb_documents.xlsx into the values returned from the Protocol Builder to create a more complete picture of required document details. 2020-03-19 10:23:50 -04:00
Dan Funk 83d859fd3a Just merging stuff real quick. 2020-03-18 17:03:36 -04:00
Dan Funk 02be8ede75 Merge remote-tracking branch 'origin/master' into feature/reference_files 2020-03-18 15:16:34 -04:00
Aaron Louie 0cc98616fd Merge branch 'master' into feature/workflow_spec_categories 2020-03-16 10:25:03 -04:00
Aaron Louie bdd07685c6 Adds status spec when adding a study, and adds/removes workflows from study based on output data from status spec. 2020-03-15 15:54:13 -04:00
Dan Funk 779674ab60 Add the ability to upload and request general reference files by name. These will be used across workflows and will frequently contain lookup tables that can be referenced by various script tasks. 2020-03-13 15:03:57 -04:00
Aaron Louie b1a6c9b6c7 Adds basic status-setting workflow spec and minimal test 2020-03-13 14:58:07 -04:00
Dan Funk 05b39df745 Fixes #12: Catching some specific common errors and re-raising as APIErrors with detailed codes and descriptions to improve debugging. In doing so, improving the error handling in the event a soft-reset causes an immediate error - and resetting to the original version of the specification in these events, to allow users the chance to try a hard reset instead. 2020-03-11 16:33:18 -04:00
Dan Funk 9fcd6f38f4 Merge remote-tracking branch 'origin/master' into feature/pb_services 2020-03-05 17:13:41 -05:00
Dan Funk 70611e2c1d Adding the version of the specification used to create a workflow to the workflow api endpoint. Though the exact content of this version is likely to change.
Split the API specific models out from the workflow models to help me keep this straight.
Added tests to help me understand the errors thrown and the resolution path when a workflow specification changes in the midst of a running workflow.
2020-03-05 11:18:20 -05:00
Dan Funk d184ccc8de
Merge pull request #16 from sartography/feature/pb_services
Feature/pb services
2020-03-03 15:38:38 -05:00
Dan Funk 7194d7d374 Standardizing the script tasks that can be executed on the server, adding tons of error messages for when things go wrong. All scripts must exist inside of the crc/scripts directory.
Adding a new script that script tasks can use to add in data about the study.

Moving all the test workflow specifications out of the main load.

fixing a pile of tests so they can find workflow specs that are now moved into the test directory.
2020-03-03 13:52:45 -05:00
Aaron Louie b965276310 Adds a mock study with the same ID as one from the data loader. 2020-03-02 15:01:41 -05:00
Aaron Louie 305118e90e Adds a test for get_studies endpoint 2020-03-02 14:42:30 -05:00
Dan Funk 5e3fdaaa94 New set of "Tools" api endpoints, that provides a way to quickly render markdown or word documents by uploading json data and a template to populate.
Improved Error messages / Error processing.  You can now just throw an APIError anywhere, and it will be properly serialized and returned.
2020-02-29 17:22:38 -05:00
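
For illustration, a hedged sketch of calling such a rendering endpoint with JSON data and a Word template; the path and field names are placeholders, the commit only says that uploading JSON data and a template produces a rendered markdown or word document.

    import json
    import requests

    # Placeholder path and field names; only the "upload JSON data plus a
    # template, get a rendered document back" idea comes from the commit above.
    with open("letter_template.docx", "rb") as template:
        response = requests.post(
            "http://localhost:5000/v1.0/render_docx",
            data={"data": json.dumps({"name": "Jane", "study_id": 1})},
            files={"file": template},
        )
    response.raise_for_status()
    with open("rendered_letter.docx", "wb") as out:
        out.write(response.content)
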
Dan Funk 2cc6010c8d Protocol builder connections 2020-02-20 13:30:04 -05:00
Dan Funk 879a248002 Adding a test to assure the file creation occurs as expected via the API. 2020-02-10 20:54:22 -05:00
Dan Funk 1d24ebe382 Provide a script for generating word documents from template files. Refactored file management into a service to make it easier to programmatically add files. Modified the workflow_processor to inject the study_id and workflow_id into the running workflow so that this meta-information is available at the task level. 2020-02-10 16:19:23 -05:00
Dan Funk ec4df2b3fa Cleaning up the tests and making it easier to test workflows without adding them to the example data structure. 2020-02-04 16:49:28 -05:00