Record the size of a file in the database for quick access (this helps with a frontend refactor, so it isn't downloading the file just to see its size).
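A minimal sketch of the idea, assuming hypothetical size and data fields on the file model:

```python
def store_file(session, file_model, content: bytes):
    # Record the byte count alongside the content so the frontend can show a
    # file's size without downloading it. (size/data are illustrative fields.)
    file_model.size = len(content)
    file_model.data = content
    session.add(file_model)
    session.commit()
```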
Cleaning up the timing/performance metric reporting to make it easier to read.
Fixing a bug that prevented non-admins from getting the document-directory.
To do so, I changed the WorkflowProcessor's reset method to be static. It will try to instantiate the current workflow and execute a cancel_notify event, but if that fails, it will continue and always return a new workflow processor using the latest spec.
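A minimal sketch of that shape; the constructor behavior and the bpmn_workflow_json field are assumptions, not the project's exact code:

```python
class WorkflowProcessor:
    def __init__(self, workflow_model):
        # In the real class this loads/creates the workflow from the model.
        self.workflow_model = workflow_model

    def cancel_notify(self):
        ...  # fires any cancel/escalation events on the current workflow

    @staticmethod
    def reset(workflow_model):
        # Best effort: fire cancel_notify on the existing workflow, but never
        # let a failure there block the reset itself.
        try:
            WorkflowProcessor(workflow_model).cancel_notify()
        except Exception:
            pass  # the old workflow may fail to load; reset regardless
        workflow_model.bpmn_workflow_json = None  # assumed field: drop stale state
        return WorkflowProcessor(workflow_model)  # rebuilt from the latest spec
```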
It is not complete, but is in a state where we can start to interact with the front end.
Two tests are failing.
Committing so I can work on an error for Alex.
In api.study.update_study we test the study status and call the new WorkflowService method process_workflows_for_cancels.
In services.workflow_service we added the new method process_workflows_for_cancels. It loops through workflows for a study, and resets them if they are in progress.
In services.workflow_processor, we changed the reset method to be an instance method so we can call self.cancel_notify.
In tests.test_lookup_service we changed the call to WorkflowProcessor.reset to reflect the change from class method to instance method.
If a hard or soft reset is performed and an error is thrown while trying to fulfill a cancel event, the reset should still fire.
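Putting those pieces together, the service method might look like this sketch (the status values are assumptions, and WorkflowProcessor is the class sketched earlier, with reset now an instance method):

```python
from enum import Enum

class WorkflowStatus(Enum):  # assumed status values
    not_started = "not_started"
    user_input_required = "user_input_required"
    waiting = "waiting"
    complete = "complete"

IN_PROGRESS = {WorkflowStatus.user_input_required, WorkflowStatus.waiting}

def process_workflows_for_cancels(study_workflows):
    # Reset every in-progress workflow for a study. reset() calls
    # self.cancel_notify internally but swallows errors from it, so one
    # failing cancel event can't stop the remaining workflows from resetting.
    for workflow in study_workflows:
        if workflow.status in IN_PROGRESS:
            WorkflowProcessor(workflow).reset()
```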
Sending emails still had a number of issues correctly parsing its arguments. This is corrected.
Making all the lookup field names consistent.
Fixing the lookup service, which was failing at times to find the correct field to use for building the lookup table.
Updating validation to check for additional fields and properties.
When connexion-level errors occur, we now wrap them in an API Error to be consistent.
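As a sketch, a connexion ProblemException can be re-rendered into the API's usual error envelope (the envelope fields shown are assumptions, not the project's exact ApiError shape):

```python
import json

from connexion import FlaskApp
from connexion.exceptions import ProblemException
from flask import Response


def render_problem(exception: ProblemException) -> Response:
    # Re-shape the connexion-level error into the API's usual error envelope
    # so clients only ever see one error format.
    body = {"code": "invalid_request",
            "message": exception.detail,
            "status": exception.status}
    return Response(json.dumps(body), status=exception.status,
                    mimetype="application/json")


app = FlaskApp(__name__)
app.add_error_handler(ProblemException, render_problem)
```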
Still having issues where we try to eval an empty definition; not quite sure why this differs from what we had before. I may need to revert some of it and determine what is going on.
This is the first waypoint on a larger effort to turn all of the 'special scripts' that currently require a shebang into just another Python function.
I've also made it fall back if we accidentally forget to add a shebang to a study, as this would otherwise be a breaking change.
With the fallback feature, it should work with unmodified BPMN documents.
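A minimal sketch of that fallback, with the registry and function names as assumptions:

```python
SPECIAL_SCRIPTS = {}  # hypothetical registry: script name -> callable

def run_script(script_text: str, task_data: dict):
    # Scripts that start with a shebang name one of the registered
    # 'special scripts'; anything else falls back to plain Python, so
    # unmodified BPMN documents keep working.
    lines = script_text.strip().splitlines()
    if lines and lines[0].startswith("#!"):
        parts = lines[0][2:].split()
        if parts and parts[0] in SPECIAL_SCRIPTS:
            return SPECIAL_SCRIPTS[parts[0]](task_data)
    # Fallback: no shebang (or an unknown name), evaluate as ordinary Python.
    exec(script_text, {}, task_data)
```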
* TaskEvents now contain the data for each event as it was when the task was completed.
* When loading a task for the front end, if the task was completed previously, we take that data and overwrite it with the latest data, allowing users to see previously entered values (see the sketch after this list).
* Pulling in the Admin branch, as it contains changes that are critical to seeing what is happening when we do this.
* Moved the code for converting a workflow into an API-ready data structure out of the API and into the Workflow service where it belongs.
* Hard resets just convert to using the latest spec; they don't try to keep the data from the last task. There is a better way.
* Moving to a previous task does not attempt to keep the data from the last completed task.
* Added a function that will fix all the existing RRT data by adding critical data into the TaskEvent model. This can be called from the Flask command line tool.
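The data merge from the second bullet amounts to something like this sketch (the TaskEvent fields and the COMPLETE action name are assumptions):

```python
def data_for_frontend(task_id: str, latest_data: dict, task_events: list) -> dict:
    # If this task was completed before, start from the data captured in its
    # TaskEvent and overlay the latest workflow data, so previously entered
    # values come back.
    previous = next((event.form_data for event in task_events
                     if event.task_id == task_id and event.action == "COMPLETE"),
                    None)
    if previous is None:
        return dict(latest_data)
    merged = dict(previous)
    merged.update(latest_data)  # the latest data overwrites the stored values
    return merged
```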
I noticed that validation sometimes looks ahead for files, so we now look at all the tasks, not just the ready tasks, when finding the lookup field.
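Roughly, assuming SpiffWorkflow-style tasks whose specs carry a form with fields:

```python
def find_lookup_field(workflow, field_id):
    # Walk every task in the workflow, not just the READY ones, since
    # validation may reference a lookup file before its task becomes ready.
    for task in workflow.get_tasks():
        form = getattr(task.task_spec, "form", None)
        if form is None:
            continue
        for field in form.fields:
            if field.id == field_id:
                return field
    return None
```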
Ran into an issue with validation where a workflow model was required, so I create one and then delete it. Another refactor for another day.
Another speed improvement: data in the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need to use this model frequently.
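In SQLAlchemy terms, that is deferred column loading (the column names here are illustrative):

```python
from sqlalchemy import Column, Integer, LargeBinary
from sqlalchemy.orm import declarative_base, deferred

Base = declarative_base()

class FileDataModel(Base):
    __tablename__ = "file_data"
    id = Column(Integer, primary_key=True)
    # deferred(): the potentially large blob is excluded from the initial
    # SELECT and loaded lazily, only when .data is actually accessed.
    data = deferred(Column(LargeBinary))
```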