SpiffWorkflow 1.2: Top-level imports moved to appropriate modules
- replace 'from SpiffWorkflow import WorkflowException' with 'from SpiffWorkflow.exceptions import WorkflowException'
- replace 'from SpiffWorkflow import TaskState' with 'from SpiffWorkflow.task import TaskState'
- replace 'from SpiffWorkflow import Task' with 'from SpiffWorkflow.task import Task'
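Taken together, the new import lines are:
```python
from SpiffWorkflow.exceptions import WorkflowException
from SpiffWorkflow.task import Task, TaskState
```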
SpiffWorkflow 1.2: Navigation code removed completely. It proved to be of little use to folks, and it was super complex and difficult to maintain.
SpiffWorkflow 1.2: When inserting custom functions into the PythonScriptEngine, be aware that the task data acts as the full context for execution and will contain global functions and methods during the exec call.
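A minimal sketch of the mechanism, using plain exec rather than the real engine internals (the dict below stands in for task.data):
```python
# The script engine exec()s the script body with the task data as its
# context, so anything injected into that context, and anything the
# script defines, persists in the task data afterward.
context = {"delay_count": 0}                      # existing task data
context["my_custom_function"] = lambda x: x * 2   # injected custom function
exec("result = my_custom_function(21)", context)  # what a script task does
assert context["result"] == 42                    # script output lands in task data
assert "my_custom_function" in context            # so does the injected function
```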
SpiffWorkflow 1.2: All Task Specs now have a spec_type attribute containing a descriptive string of the type, such as "User Task", "Script Task", or "Start Event".
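This makes it straightforward to filter tasks by type. A small sketch, assuming a BpmnWorkflow instance named workflow:
```python
from SpiffWorkflow.task import TaskState

# Collect the ready tasks whose spec identifies them as user tasks.
ready_tasks = workflow.get_tasks(TaskState.READY)
user_tasks = [t for t in ready_tasks if t.task_spec.spec_type == "User Task"]
```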
* Moved all the performance metric code into a separate function.
* Restructured the code so it either creates a new workflow or deserializes an old one.
* Added code to upgrade serialized objects from 1.0 to 1.1 (a sketch of the idea follows the code block below).
* Using the new method of creating a bpmn_workflow object:
```python
# Parse the uploaded spec files.
parser = self.get_spec_parser(self.spec_files, spec_info)
# The spec for the primary process, plus the specs of any processes
# it may call as subprocesses or call activities.
top_level = parser.get_spec(spec_info.primary_process_id)
subprocesses = parser.get_process_specs()
self.bpmn_workflow = BpmnWorkflow(top_level, subprocesses, script_engine=self._script_engine)
```
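The 1.0-to-1.1 upgrade mentioned above amounts to a version-gated migration of the serialized dictionary. A rough sketch of the shape of that code (the version key and migration details are illustrative, not the exact implementation):
```python
def upgrade_serialization(dct):
    """Upgrade a serialized workflow dict from version 1.0 to 1.1 in place."""
    if dct.get("serializer_version") == "1.0":
        # ... rewrite the parts of the structure that changed in 1.1 ...
        dct["serializer_version"] = "1.1"
    return dct
```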
Fixed a few minor bugs that stood out while testing:
1. When updating a workflow, we should check for a valid task BEFORE calling cancel_notify, which requires one.
2. get_localtime: a quick fix to the date parser for Python 3.9.
3. The start_workflow script would error out in a way that made it unclear which workflow had the problem. Fixed the error message so the offending workflow is identified.
Also:
* Assured that arguments are consistent (we always seem to use workflow_spec_id, so I made sure we use that consistently).
* Don't require named parameters, so it's cool to call it like reset_workflow('my_workflow_id').
* Task Actions (i.e. create, assign, etc.) are now an enumeration in the models, not static variables on Workflow Service, so we can reference them consistently from anywhere (see the sketch after this list).
* Removed some repetitive code
* Always try to validate as much as possible in the scripts to save folks time debugging.
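A sketch of what such an enumeration looks like; the member names here are illustrative, not the exact set in the models:
```python
import enum

class TaskAction(enum.Enum):
    CREATE = "CREATE"
    ASSIGNMENT = "ASSIGNMENT"
    COMPLETE = "COMPLETE"
    HARD_RESET = "HARD_RESET"

# Any module can reference the same constant, e.g. TaskAction.COMPLETE,
# instead of reaching for a static variable on the Workflow Service.
```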
1. Avoid ever re-generating the list of scripts that can be used in a script task. It is terribly expensive, as we call eval constantly, and the list never changes once the app starts. (See the script.py changes and comments, and the first sketch after this list.)
2. Cache the DocumentStatus list in the flask session, so we calculate it at most once per API call. It costs at least 0.25 seconds per call. (See study_service, and the second sketch after this list.)
3. We called UserFileService.get_files_for_study (which runs a db query EVERY time) for every possible document type. Now we run the query once (study_service, line 321; see the third sketch after this list).
4. When returning a workflow, we looped through every single task in that workflow's navigation and called the expensive spiff_task_to_api_task just to figure out its proper display name. We now use a much faster and more efficient method to calculate the display name (see workflow_service, lines 680 and 799).
5. A hellton of @timeit and sincetime() calls that I want to leave in to help debug any slowness on production.
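First sketch (for item 1): compute the script list once and reuse it for the life of the process. The Script base class and function name are hypothetical stand-ins:
```python
import functools

class Script:
    """Hypothetical base class for scripts usable in a script task."""

@functools.lru_cache(maxsize=1)
def get_script_list():
    # Expensive to build in the real code (it called eval repeatedly),
    # but the set of scripts never changes once the app starts, so the
    # cache makes this a one-time cost.
    return sorted(cls.__name__ for cls in Script.__subclasses__())
```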
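Second sketch (for item 2): scope the expensive computation to a single request. flask.g is one way to get the at-most-once-per-API-call behavior described above; compute_document_statuses is a hypothetical stand-in for the real computation:
```python
from flask import g

def compute_document_statuses(study_id):
    ...  # stand-in for the real, expensive (~0.25s) DocumentStatus calculation

def get_document_statuses(study_id):
    # flask.g lives for exactly one request, so repeated calls within a
    # single API call reuse the first result instead of recomputing it.
    attr = f"_doc_statuses_{study_id}"
    if not hasattr(g, attr):
        setattr(g, attr, compute_document_statuses(study_id))
    return getattr(g, attr)
```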
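Third sketch (for item 3): one query, grouped in memory, instead of one query per document type. It assumes the UserFileService named above; the irb_doc_code field name is illustrative:
```python
from collections import defaultdict

def files_by_document_type(study_id):
    grouped = defaultdict(list)
    # Single db round trip, instead of calling get_files_for_study once
    # for every possible document type.
    for f in UserFileService.get_files_for_study(study_id):
        grouped[f.irb_doc_code].append(f)
    return grouped
```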
1. Added two API endpoints that allow us to get all the logs for a specific workflow or study.
2. Assured that requests for logs are paginated, sortable, and can be limited to a specific code if needed (see the sketch below).
3. Assured that we only use logging levels that match Python's built-in log levels (debug, info, warning, error, critical).
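A sketch of the query behind those endpoints, assuming Flask-SQLAlchemy and a TaskLogModel with workflow_id, code, and timestamp columns (all names illustrative):
```python
def get_logs_for_workflow(workflow_id, code=None, page=1, per_page=10):
    query = TaskLogModel.query.filter(TaskLogModel.workflow_id == workflow_id)
    if code is not None:
        query = query.filter(TaskLogModel.code == code)    # limit to one code
    query = query.order_by(TaskLogModel.timestamp.desc())  # sortable
    # Flask-SQLAlchemy pagination (1-indexed pages).
    return query.paginate(page=page, per_page=per_page, error_out=False)
```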