Adding checks in the API to ensure the correct person is completing a task, based on the task's lane.
Adding lane to the Navigation item.
Adding a check to ensure that unique user ids can be identified using task.data.
Added some additional LDAP entries to make testing and development easier.
Removed a big chunk of duplicate code from task_tests_api.
Modified some of the helper functions to make it easier to test as specific users.
Added some additional bpmn models to the tests for testing lanes and roles.
* The Task.title returned to the front end will now attempt to process the "display_name" property for dot-notation syntax, making it possible to use this for multi-instance tasks,
but it will work in all cases where we want the title to change based on values in the data model.
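For illustration, the dot-notation resolution amounts to something like the sketch below; the helper name and the fallback behavior are mine, not the actual implementation:

    def resolve_dot_notation(path, data):
        """Walk task data along a dotted path, e.g. 'investigator.display_name'."""
        value = data
        for part in path.split('.'):
            if isinstance(value, dict) and part in value:
                value = value[part]
            else:
                return None  # caller can fall back to the raw display_name
        return value

    # resolve_dot_notation('investigator.label', task.data) might yield a
    # per-instance label for a multi-instance task.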
* Fixing a bug in test_study_api where it wasn't updated when we made recent changes to the different states of a study.
Criteria:
task.multi_instance_type == 'looping'
To terminate, use the standard endpoint for submitting form data with a query variable of terminate_loop=true (see the sketch after this list).
The front end will likely need two buttons:
"Submit and quit"
"Submit and add another"
or something similar.
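As a rough sketch of what the front end would send, assuming a task-data endpoint of roughly this shape (the path here is a placeholder; only the terminate_loop query variable is the real contract described above):

    import requests

    BASE = "http://localhost:5000/v1.0"   # placeholder base URL
    task_id = "some-task-uuid"            # placeholder

    # "Submit and add another": submit the form data as usual.
    requests.put(f"{BASE}/task/{task_id}/data", json={"field": "value"})

    # "Submit and quit": same endpoint, plus terminate_loop=true.
    requests.put(f"{BASE}/task/{task_id}/data",
                 params={"terminate_loop": "true"},
                 json={"field": "value"})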
Also, when returning error messages, attempt to include the task data for the task that caused the error.
Also, when an attempt to delete a file fails, respond with an API error explaining the issue, and log the details.
Another speed improvement - data in the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need to use this model frequently.
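The pattern is SQLAlchemy's deferred() column loading; a minimal sketch (the field names are illustrative, not the actual model):

    from sqlalchemy import Column, Integer, LargeBinary
    from sqlalchemy.orm import declarative_base, deferred

    Base = declarative_base()

    class FileDataModel(Base):
        __tablename__ = 'file_data'
        id = Column(Integer, primary_key=True)
        # deferred(): the potentially large payload is excluded from the
        # initial SELECT and fetched lazily, only when .data is accessed.
        data = deferred(Column(LargeBinary))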
From an API point of view, you can do the following (and only the following):
/files?workflow_spec_id=x
* You can find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id
/files?workflow_id=x
* You can find all files associated with a workflow_id, and add a file that is directly associated with the workflow
/files?workflow_id=x&form_field_key=y
* You can find all files associated with a form element on a running workflow, and add a new file.
Note: you can add multiple files to the same form_field_key IF they have different file names. If a file with the same name is added, the original file is archived,
and the new file takes its place.
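Roughly, from a client's perspective (the base URL and ids are placeholders; the query patterns are the ones listed above):

    import requests

    BASE = "http://localhost:5000/v1.0"   # placeholder base URL

    # All files for a workflow specification:
    requests.get(f"{BASE}/files", params={"workflow_spec_id": "x"})

    # All files directly associated with a running workflow:
    requests.get(f"{BASE}/files", params={"workflow_id": "x"})

    # Files attached to a particular form field on a running workflow;
    # POSTing a duplicate file name here archives the original.
    requests.get(f"{BASE}/files",
                 params={"workflow_id": "x", "form_field_key": "y"})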
The study endpoints always return a list of the file metadata associated with the study. Removed /studies-files, but there is an
endpoint called /studies/all that returns all the studies in the system, and does include their files.
On a deeper level:
The File model no longer contains:
- study_id,
- task_id,
- form_field_key
Instead, if the file is associated with a workflow, then that is the only way it is connected to the study, and we use this relationship to find files for a study.
A file is never associated with a task_id, as these change when the workflow is reloaded.
The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
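A sketch of what that relationship-based lookup could look like; the model and column names here are assumptions based on the description above, not the actual schema:

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class WorkflowModel(Base):
        __tablename__ = 'workflow'
        id = Column(Integer, primary_key=True)
        study_id = Column(Integer)

    class FileModel(Base):
        __tablename__ = 'file'
        id = Column(Integer, primary_key=True)
        name = Column(String)
        workflow_id = Column(Integer, ForeignKey('workflow.id'))
        irb_doc_code = Column(String)

    def files_for_study(session: Session, study_id: int):
        # A file reaches a study only through its workflow, so join on it.
        return (session.query(FileModel)
                .join(WorkflowModel, FileModel.workflow_id == WorkflowModel.id)
                .filter(WorkflowModel.study_id == study_id)
                .all())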
Refactored calls into a new lookup_service to keep things tidy.
New keys for all enum/auto-complete fields (see the sketch after this list):
PROP_OPTIONS_FILE = "spreadsheet.name"
PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
PROP_LDAP_LOOKUP = "ldap.lookup"
FIELD_TYPE_AUTO_COMPLETE = "autocomplete"
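To illustrate how these keys might drive the lookup, here is a sketch of the decision logic; the real LookupService internals may differ, and the function name is mine:

    PROP_OPTIONS_FILE = "spreadsheet.name"
    PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
    PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
    PROP_LDAP_LOOKUP = "ldap.lookup"

    def lookup_source(props):
        """Given a field's extension properties as a dict, pick a source."""
        if props.get(PROP_LDAP_LOOKUP):
            return ("ldap",)
        if PROP_OPTIONS_FILE in props:
            return ("spreadsheet",
                    props[PROP_OPTIONS_FILE],
                    props.get(PROP_OPTIONS_VALUE_COLUMN),
                    props.get(PROP_OPTIONS_LABEL_COL))
        raise ValueError("autocomplete field has no lookup configuration")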
No Previous Task, No Last Task, No Task List. Just the current task, and the Navigation.
Use the token endpoint to set the current task, even if it is a "READY" task in the API.
The Previous Task can be set by identifying the prior task in the Navigation (I'm hoping).
Preferring camelCase to snake_case on all new APIs. Maybe clean the rest up later.
Running all extensions/properties through the Jinja template processor so you can have custom display names using data; very helpful for building multi-instance displays.
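For example, with Jinja2 directly (the data and template here are made up, just to show the mechanics):

    from jinja2 import Template

    task_data = {"investigator": {"display_name": "Dr. Ada Lovelace"}}
    template = "Review conflicts for {{ investigator.display_name }}"
    print(Template(template).render(task_data))
    # -> Review conflicts for Dr. Ada Lovelace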
Properties were returned as an array of key/value pairs, which is just mean. Switched this to a dictionary.
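For illustration, the shape change looks like this (values are made up):

    # Before: an array of key/value pairs.
    properties = [{"id": "display_name", "value": "Review {{ x }}"}]

    # After: a plain dictionary.
    properties = {"display_name": "Review {{ x }}"}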
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less subject to breaking through better mocks.
Allow all tests to pass even when the Protocol Builder mock isn't running locally.
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shortening the SponsorsList.xls file makes for slightly faster tests. Seems senseless to load 5,000 records every time we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
Because the name field is now used to expose workflow/sub-process information on tasks, we can't use it to store the workflow_version, so that is now just stored on the database model, which is much cleaner and removes a duplication.
The study status values are now:
INCOMPLETE = 'Incomplete in Protocol Builder',
ACTIVE = 'Active / Ready to roll',
HOLD = 'On Hold',
OPEN = 'Open - this study is in progress',
ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
Moving the primary process id from the workflow model to the file model, and ensuring it is updated properly. This was causing a bug that would "lose" the workflow.
Found a problem where the documentation for elements was being processed BEFORE data was loaded from a script. There still may be some issues here.
Ran into an issue with circular dependencies - handling it with a new workflow_service, and pulling computational logic out of the api_models - it was the right thing to do.
Some ugly fixes in the file_service to improve pandas output from spreadsheet processing; I need to revisit this.
Now that SpiffWorkflow handles multi-instance, we can't have random multi-instance tasks around.
Improved tests around study deletion.