No Previous Task, No Last Task, No Task List. Just the current task, and the Navigation.
Use the token endpoint to set the current task, even if it is a "READY" task in the API.
The Previous Task can be determined by identifying the prior task in the Navigation (I'm hoping).
Preferring camel case to snake case on all new APIs. Maybe clean the rest up later.
Running all extension/properties through the Jinja template processor so you can have custom display names using data; very helpful for building multi-instance displays.
Properties were returned as an array of key/value pairs, which is just mean. Switched this to a dictionary.
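A minimal sketch of the idea, using the new dictionary shape for properties (the helper name here is made up, not the project's actual API):

```python
from jinja2 import Template

def render_extension_properties(properties, task_data):
    """Render each extension property value as a Jinja template against
    the task's data, so display names can vary per instance."""
    return {key: Template(value).render(**task_data)
            for key, value in properties.items()}

# a display name that differs for each instance of a multi-instance task
props = {'display_name': 'Review documents for {{ investigator }}'}
print(render_extension_properties(props, {'investigator': 'Dr. Smith'}))
# {'display_name': 'Review documents for Dr. Smith'}
```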
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less subject to breaking through better mocks.
Allowing all tests to pass even when the Protocol Builder mock isn't running locally.
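For instance, patching the service at its lookup point so a canned response stands in for the live mock (the patch target and test shape are illustrative, not the project's exact tests):

```python
from unittest.mock import patch

@patch('crc.services.protocol_builder.ProtocolBuilderService.get_studies')
def test_studies_endpoint_without_pb(mock_get_studies):
    # canned response instead of a call to a locally running PB mock
    mock_get_studies.return_value = []
    # ... exercise the endpoint under test here ...
    mock_get_studies.assert_called_once()
```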
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
I noticed we were saving the workflow every time we loaded it up, rather than only when we were making changes to it. Refactored this to be a little more careful.
Centralized the saving of the workflow into one location in the processor, so we can make sure we update all the details about that workflow every time we save.
The workflow service has a method that will log any task action taken in a consistent way.
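A rough sketch of both pieces, assuming a Flask-SQLAlchemy `db` handle and a hypothetical TaskEventModel (all names here are illustrative):

```python
class WorkflowProcessor:
    def save(self):
        """The one place a workflow is persisted, so derived details
        (status, task counts) are refreshed on every save."""
        self.workflow_model.status = self.get_status()
        self.workflow_model.total_tasks = len(self.get_all_user_tasks())
        self.workflow_model.completed_tasks = len(self.get_completed_user_tasks())
        db.session.add(self.workflow_model)
        db.session.commit()

class WorkflowService:
    @staticmethod
    def log_task_action(user_uid, workflow_model, task, action):
        """Every task action (complete, undo, assign, ...) logged one way."""
        db.session.add(TaskEventModel(user_uid=user_uid,
                                      workflow_id=workflow_model.id,
                                      task_id=str(task.id),
                                      action=action))
        db.session.commit()
```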
The stats models were removed from the API completely. Will wait for a use case before revisiting this.
Because the name field is now used to expose workflow/sub-process information on tasks, we can't use it to store the workflow_version; that is now stored on the database model, which is much cleaner and removes a duplication.
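On the model, that amounts to something like this (a sketch assuming Flask-SQLAlchemy; the column name is an assumption):

```python
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class WorkflowModel(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    # the version lives here now, freeing the task name field for
    # workflow/sub-process information
    spec_version = db.Column(db.String)
```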
The study statuses are now:

```python
import enum

class ProtocolBuilderStatus(enum.Enum):  # enclosing class name assumed
    INCOMPLETE = 'Incomplete in Protocol Builder'
    ACTIVE = 'Active / Ready to roll'
    HOLD = 'On Hold'
    OPEN = 'Open - this study is in progress'
    ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
```
Moving the primary process id from the workflow model to the file model, and ensuring it is updated properly. This was causing a bug that would "lose" the workflow.
Found a problem where the documentation for elements was being processed BEFORE data was loaded from a script. There still may be some issues here.
Ran into an issue with circular dependencies; handled it with a new workflow_service and pulled computational logic out of the api_models. It was the right thing to do.
Adding id and spec_version to the workflow metadata.
Refactoring the processing of the master_spec so that it doesn't pollute the workflow database.
Adding tests to ensure that the status and counts are updated on the workflow model as users make progress.
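Along these lines (the helpers, fixture name, and status values are hypothetical):

```python
def test_counts_update_with_progress(self):
    workflow = self.create_workflow('two_user_tasks')  # hypothetical helper
    processor = WorkflowProcessor(workflow)
    processor.do_engine_steps()
    processor.save()
    self.assertEqual('user_input_required', workflow.status.value)
    self.assertEqual(2, workflow.total_tasks)
    self.assertEqual(0, workflow.completed_tasks)

    processor.complete_task(processor.next_task())
    processor.save()
    self.assertEqual(1, workflow.completed_tasks)
```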
Created a Study object (separate from the StudyModel) that can be constructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.
Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN; they have no version yet and no stored data, just the possibility of being started.
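A sketch of the shape, using dataclasses for brevity (every name here is illustrative):

```python
from dataclasses import dataclass, field
from enum import Enum

class WorkflowStatus(Enum):
    not_started = 'not_started'   # no processor, no BPMN, no version, no data
    user_input_required = 'user_input_required'
    waiting = 'waiting'
    complete = 'complete'

@dataclass
class Workflow:
    workflow_spec_id: str
    status: WorkflowStatus = WorkflowStatus.not_started

@dataclass
class Category:
    name: str
    workflows: list = field(default_factory=list)

@dataclass
class Study:
    """Built on request from a StudyModel; shaped for the API, not the DB."""
    id: int
    title: str
    categories: list = field(default_factory=list)
```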
The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.
Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.
Example data no longer creates users or studies, it just creates the specs.
Fixed adding a study so that all workflows are added again; status on those workflows will be set based on output from the master BPMN diagram, which is coming shortly.
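Roughly (model and session names are assumptions):

```python
def add_study(session, study_model):
    """A study contains every known workflow spec by definition; each
    workflow starts as not_started until the master BPMN says otherwise."""
    session.add(study_model)
    session.flush()  # assign study_model.id before referencing it
    for spec in session.query(WorkflowSpecModel).all():
        session.add(WorkflowModel(study_id=study_model.id,
                                  workflow_spec_id=spec.id,
                                  status=WorkflowStatus.not_started))
    session.commit()
```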
The Protocol Builder service now returns real models, not dictionaries, forcing proper validation and fail-fast behavior.
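For example, pushing responses through a schema so a bad payload raises immediately instead of drifting through the app as a loose dict (a marshmallow sketch; the class and field names are assumptions):

```python
from marshmallow import Schema, fields, post_load, EXCLUDE

class ProtocolBuilderStudy:
    def __init__(self, STUDYID, TITLE):
        self.study_id = STUDYID
        self.title = TITLE

class ProtocolBuilderStudySchema(Schema):
    class Meta:
        unknown = EXCLUDE
    STUDYID = fields.Int(required=True)
    TITLE = fields.Str(required=True)

    @post_load
    def make_study(self, data, **kwargs):
        # a missing or mistyped field raises ValidationError before this runs
        return ProtocolBuilderStudy(**data)

# studies = ProtocolBuilderStudySchema(many=True).load(response.json())
```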
Changed the name of the "status" spec to "top_level_workflow" and removed any connection a workflow or study has with this specification. It is only used to determine status in real time, and is not reused or tracked.
Modified the required documents script to return a dictionary and not an array, making it easier to reference specific values in the BPMN and DMN.
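The difference in shape (the document codes below are made up):

```python
# before: an array you had to search through
docs = [{'code': 'Protocol', 'required': True},
        {'code': 'Consent_Form', 'required': True}]

# after: a dictionary keyed by code, so a BPMN/DMN expression can say
# something like documents['Consent_Form']['required'] directly
documents = {
    'Protocol':     {'required': True, 'count': 0},
    'Consent_Form': {'required': True, 'count': 2},
}
```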
Working on new ways to test the top_level_workflow in the context of updates, this is still a work in progress.
Making use of several modifications to the Spiff library that enable more complex expressions in DMN models. This is evident in the new DMN models for the top_level_workflow.
Required Documents is becoming complicated, so we are making it its own script task and removing it from study_info.py.
The file_service is now very aware of the irb_documents file, so it will always need to exist. We seed this file during setup, but it can be overwritten by the configurator.
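Seeding might look like this (paths and method names are illustrative, not the actual file_service API):

```python
def seed_reference_files(file_service):
    """Guarantee irb_documents.xlsx always exists; a configurator
    upload simply replaces the seeded copy."""
    if not file_service.reference_file_exists('irb_documents.xlsx'):
        with open('static/reference/irb_documents.xlsx', 'rb') as fh:
            file_service.add_reference_file('irb_documents.xlsx', fh.read())
```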