* Moved all the performance metric code into a separate function.
* Restructured the code so that it either creates a new workflow or deserializes an old one.
* Added code to upgrade serialized objects from 1.0 to 1.1.
* Using the new method of creating a bpmn_workflow object:
```python
parser = self.get_spec_parser(self.spec_files, spec_info)
top_level = parser.get_spec(spec_info.primary_process_id)
subprocesses = parser.get_process_specs()
self.bpmn_workflow = BpmnWorkflow(top_level, subprocesses, script_engine=self._script_engine)
```
Fixed a few minor bugs that stood out while testing:
1. When updating a workflow, check for a valid task BEFORE calling cancel_notify, which requires one (see the sketch after this list).
2. get_localtime: a quick fix to the date parser for Python 3.9.
3. The start_workflow script would error out in a way that made it unclear which workflow was having the problem. Fixed that error.
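A minimal sketch of the ordering in point 1; `processor`, `next_task`, and `cancel_notify` here are illustrative stand-ins, not the exact project API:
```python
# Sketch only: guard cancel_notify behind a task check, since cancel_notify
# requires a valid task.
def update_workflow(processor):
    task = processor.next_task()      # may be None if no task is ready
    if task is not None:
        processor.cancel_notify()     # only called once we know a task exists
```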
Also:
* Ensured that arguments are consistent (we always seem to use workflow_spec_id, so I made sure we use that everywhere).
* Named parameters are not required, so it's fine to call it like: reset_workflow('my_workflow_id').
* Task Actions (i.e. create, assign, etc.) are now an enumeration in the models, not static variables on WorkflowService, so we can reference them consistently from anywhere (see the sketch after this list).
* Removed some repetitive code
* Always try to validate as much as possible in the scripts to save folks time debugging.
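As a rough illustration of the Task Actions change, the enumeration might look something like this (the member names here are assumptions, not the project's exact list):
```python
import enum

class TaskAction(enum.Enum):
    # Illustrative members; the real enumeration lives in the models module.
    CREATE = "CREATE"
    COMPLETE = "COMPLETE"
    ASSIGNMENT = "ASSIGNMENT"
    HARD_RESET = "HARD_RESET"
```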
Removing the spreadsheet.value.column and data.value.column so we just have value.column for both.
Improving the __str__ method on the ApiError class to make debugging a little easier.
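A minimal sketch of the idea, assuming a simplified ApiError with code, message, and status_code attributes (the real class carries more detail):
```python
class ApiError(Exception):
    def __init__(self, code, message, status_code=500):
        super().__init__(message)
        self.code = code
        self.message = message
        self.status_code = status_code

    def __str__(self):
        # Include the code and status so a failed test or log line is self-explanatory.
        return f"ApiError: {self.code} ({self.status_code}): {self.message}"
```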
Adding a "validate_all" flask command, to help us track down any issues with current workflows in production (use this in concert with sync_with_testing)
Fixed lots of tests.
Removed fact_runner.py, a very early and crufty bit of code.
To do so, I changed the WorkflowProcessor's reset method to be static: it will try to instantiate the current workflow and execute a cancel_notify event, but if that fails, it will continue and always return a new workflow processor using the latest spec.
In api.study.update_study we test the study status and call the new WorkflowService method process_workflows_for_cancels.
In services.workflow_service we added the new method process_workflows_for_cancels; it loops through the workflows for a study and resets any that are in progress (a sketch follows below).
In services.workflow_processor, we changed the reset method to be an instance method so we can call self.cancel_notify.
In tests.test_lookup_service we changed the call to WorkflowProcessor.reset to reflect the change from class method to instance method.
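A minimal sketch of process_workflows_for_cancels; WorkflowModel, WorkflowStatus, session, and the reset call are stand-ins for the real project objects, with the reset behavior as described above:
```python
# Sketch only: names below are illustrative, not the exact project API.
class WorkflowService:
    @staticmethod
    def process_workflows_for_cancels(study_id):
        workflows = session.query(WorkflowModel).filter_by(study_id=study_id).all()
        for workflow in workflows:
            # Only in-progress workflows get reset.
            if workflow.status in (WorkflowStatus.user_input_required,
                                   WorkflowStatus.waiting):
                WorkflowProcessor.reset(workflow)
```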
Adds an optional 'id' parameter to the lookup endpoint so you can find a specific entry in the lookup table.
Makes sure the data attribute returned on a lookup model is a dictionary, and not a string.
Fixes a previous bug that would crop up if double spaces were used when performing a search.
Another speed improvement: data in the FileDataModel is deferred and not queried until it is specifically used, since the new data structures need this model frequently.
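For illustration, deferring a large column with SQLAlchemy looks roughly like this (the column names here are assumptions):
```python
from sqlalchemy import Column, Integer, LargeBinary
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import deferred

Base = declarative_base()

class FileDataModel(Base):
    __tablename__ = "file_data"
    id = Column(Integer, primary_key=True)
    # The (potentially large) binary payload is not loaded until .data is accessed.
    data = deferred(Column(LargeBinary))
```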
Refactored calls into a new lookup_service to keep things tidy.
New keys for all enum/auto-complete fields:
PROP_OPTIONS_FILE = "spreadsheet.name"
PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
PROP_LDAP_LOOKUP = "ldap.lookup"
FIELD_TYPE_AUTO_COMPLETE = "autocomplete"