SpiffWorkflow 1.2: Top-level imports moved to their appropriate modules:
- replace 'from SpiffWorkflow import WorkflowException' with 'from SpiffWorkflow.exceptions import WorkflowException'
- replace 'from SpiffWorkflow import TaskState' with 'from SpiffWorkflow.task import TaskState'
- replace 'from SpiffWorkflow import Task' with 'from SpiffWorkflow.task import Task'
SpiffWorkflow 1.2: Navigation code removed completely. It proved to be of little use to folks, and was super complex and difficult to maintain.
SpiffWorkflow 1.2: When inserting custom functions into the PythonScriptEngine, be aware that the task data will act as the full context for execution, and will contain global functions and methods during the exec call.
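A minimal sketch of injecting a custom function - the import path and constructor argument (default_globals here) vary across SpiffWorkflow versions, so treat both as assumptions:
```python
from SpiffWorkflow.bpmn.PythonScriptEngine import PythonScriptEngine

def get_study_id():
    # Hypothetical helper we want script tasks to be able to call.
    return 42

# Anything injected here is merged into the task data that serves as the
# exec context, so scripts can call get_study_id() alongside their own data.
script_engine = PythonScriptEngine(default_globals={'get_study_id': get_study_id})
```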
SpiffWorkflow 1.2: All task specs now have a spec_type attribute containing a descriptive string of the type, such as "User Task", "Script Task", "Start Event", etc.
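For example (a sketch, assuming bpmn_workflow is a BpmnWorkflow instance):
```python
for task in bpmn_workflow.get_tasks():
    # spec_type is a human-readable string describing the task spec.
    print(task.task_spec.name, '->', task.task_spec.spec_type)
```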
SpiffWorkflow 1.2: remove all references to timeit (it is no longer in SpiffWorkflow).
SpiffWorkflow 1.2: PythonScriptEngine._evaluate no longer accepts a task argument.
SpiffWorkflow 1.2: CancelEventDefinition was removed - please use SignalEventDefinition instead.
EX: REPLACE: bpmn_workflow.signal('cancel')  # generate a cancel signal
             bpmn_workflow.catch(CancelEventDefinition())
    WITH:    bpmn_workflow.catch(SignalEventDefinition('cancel'))
SpiffWorkflow 1.2: Task states are JUST integers, and TaskStateNames is now a public dictionary that can be used to convert a state to a human-readable string.
EX: REPLACE: user_task.state.name
WITH: TaskStateNames[user_task.state]
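A short sketch, assuming user_task is a SpiffWorkflow Task:
```python
from SpiffWorkflow.task import TaskState, TaskStateNames

if user_task.state == TaskState.READY:      # states compare as plain integers
    print(TaskStateNames[user_task.state])  # -> "READY"
```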
Upgraded SpiffWorkflow and now use the new get_subprocess_specs method.
Updated calculate_stats in the workflow processor, as the serialization had changed drastically and I needed to debug some performance issues.
Added a get_navigation method that will calculate a basic navigation list MUCH faster than using get_flat_nav_list in SpiffWorkflow's Navigation object.
Modified a hellton of tests because we don't have total_task and completed_task counts, or a complex nested navigation list anymore.
* Moved all the performance metric code into a separate function.
* restructured the code so it is either creating a new workflow, or deserializing an old one.
* Added code to upgrade serialized objects from 1.0 to 1.1
* Using the new method of creating a bpmn_workflow object:
```python
# Parse the spec files for this workflow (get_spec_parser is our own helper).
parser = self.get_spec_parser(self.spec_files, spec_info)
# The top-level spec for the primary process and the specs for any processes
# it calls are retrieved separately, then both are handed to BpmnWorkflow.
top_level = parser.get_spec(spec_info.primary_process_id)
subprocesses = parser.get_subprocess_specs(spec_info.primary_process_id)
self.bpmn_workflow = BpmnWorkflow(top_level, subprocesses, script_engine=self._script_engine)
```
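By contrast, when the workflow already exists, we skip the spec files entirely and just restore the saved state. A rough sketch, assuming the model stores its state in a bpmn_workflow_json column and self._serializer is the BpmnWorkflowSerializer:
```python
if workflow_model.bpmn_workflow_json:
    # Existing workflow: restore from stored JSON (upgrading older 1.0
    # serializations first, per the migration code mentioned above).
    self.bpmn_workflow = self._serializer.deserialize_json(workflow_model.bpmn_workflow_json)
    self.bpmn_workflow.script_engine = self._script_engine
else:
    # New workflow: parse the spec files as shown above.
    ...
```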
Fixed a few minor bugs that stood out while testing:
1. When updating a workflow, we should check for a valid task BEFORE calling cancel_notify, which requires a valid task.
2. get_localtime - a quick fix to the date parser for Python 3.9.
3. The start_workflow script would error out in a way that made it unclear which workflow was having the problem. The error message now identifies the workflow.
Also:
* Assured that arguments are consistent (we always seem to use workflow_spec_id, so I made sure we use that consistently).
* Don't require named parameters - so it's cool to call it like: reset_workflow('my_workflow_id')
* Task Actions (i.e. create, assign, etc.) are now an enumeration in the models, and not static variables on WorkflowService, so we can reference them consistently from anywhere (see the sketch after this list).
* Removed some repetitive code
* Always try to validate as much as possible in the scripts to save folks time debugging.
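A minimal sketch of the enumeration, with hypothetical member names (the real member list lives in the models):
```python
import enum

class TaskAction(enum.Enum):
    # Hypothetical members - the actual enum in the models defines the full set.
    CREATE = "CREATE"
    ASSIGN = "ASSIGN"
    COMPLETE = "COMPLETE"

# Referenced consistently from anywhere, e.g.:
action = TaskAction.ASSIGN.value  # -> "ASSIGN"
```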
This cleans up the _evaluate method, which previously accepted arbitrary args and kwargs. It now requires an expression, a context in which to execute it, and, optionally, the current task being executed if the DMN is being executed as part of a BusinessRuleTask in a BPMN diagram.
This also cleans up several bits of duplicated code.
There is also a bit of code here to assure that the current user is included when running the master workflow.
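A sketch of the resulting signature (parameter names assumed from the description above):
```python
def _evaluate(self, expression, context, task=None):
    """Evaluate `expression` against `context` (a dict of variables).

    `task` is only provided when a DMN table runs as part of a
    BusinessRuleTask within a BPMN diagram.
    """
    ...
```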
1. Avoid ever re-generating the list of scripts that can be used in a script task. Terribly expensive, as we call eval constantly, and the list never changes once the app starts. (See script.py changes and comments, and the caching sketch after this list.)
2. Cache the DocumentStatus list in the Flask session, so we calculate it at most once per API call. It's at least 0.25 seconds per call. (see study_service)
3. We called UserFileService.get_files_for_study (which runs a db query EVERY time) for every possible document type. Now we run the query once (study service, line 321).
4. When returning a workflow, we looped through every single task in that workflow's navigation and called the expensive spiff_task_to_api_task just to figure out its proper display name. We now use a much faster and more efficient method to calculate the display name (see workflow_service, lines 680 and 799).
5. A hellton of @timeit and sincetime() calls that I want to leave in, to help debug any slowness on production.
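A sketch of the idea behind item 1, using hypothetical names (the real change is in script.py): compute the list once per process and reuse it.
```python
class Script:
    """Stand-in for the project's script base class."""

_script_classes = None  # module-level cache, computed once

def get_script_classes():
    """Return Script subclasses keyed by name, computed once per process."""
    global _script_classes
    if _script_classes is None:
        # The set of subclasses never changes after app startup, so scanning
        # once avoids repeated reflection work on every eval.
        _script_classes = {cls.__name__: cls for cls in Script.__subclasses__()}
    return _script_classes
```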
It can be kind of irritating for this stuff to be spinning up when you are trying to debug something, so just set PROCESS_WAITING_TASKS to False in instance/config.py and voila!!
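For example, in instance/config.py:
```python
# instance/config.py
PROCESS_WAITING_TASKS = False  # don't process waiting tasks while debugging
```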
Only load the spec data files if you are creating a new workflow; otherwise, just deserialize the JSON.
Removed the stuff about calculating the version of the spec, as we don't use it.
Assure we raise more thoughtful error messages when we get exceptions in engine tasks.
Field Options should always be available now, due to a fix in SpiffWorkflow.