SpiffWorkflow 1.2: Top-level imports moved to appropriate modules
- replace 'from SpiffWorkflow import WorkflowException' with 'from SpiffWorkflow.exceptions import WorkflowException'
- replace 'from SpiffWorkflow import TaskState' with 'from SpiffWorkflow.task import TaskState'
- replace 'from SpiffWorkflow import Task' with 'from SpiffWorkflow.task import Task'
SpiffWorkflow 1.2: Navigation code removed completely. It proved to be of little use to folks, and was super complex and difficult to maintain.
SpiffWorkflow 1.2: When inserting custom functions into the PythonExecutionEngine, be aware that the task data acts as the full context for execution, and will contain global functions and methods during the exec call.
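A minimal, self-contained sketch of that behavior using plain Python exec (not the SpiffWorkflow API itself; the helper and variable names are illustrative):

    def lookup_study(study_id):                      # hypothetical injected helper
        return {"id": study_id, "title": "Example"}

    task_data = {"study_id": 42, "lookup_study": lookup_study}
    # The task data dict doubles as the exec globals, so the script sees both
    # ordinary task variables and the injected helper during the call...
    exec("study = lookup_study(study_id)", task_data)
    print(task_data["study"])                        # {'id': 42, 'title': 'Example'}
    # ...and afterwards the helper (and __builtins__) are still sitting in task_data.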
SpiffWorkflow 1.2: All Task Specs now have a spec_type attribute, containing a descriptive string of the type, such as "User Task", "Script Task", or "Start Event".
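For example, assuming a loaded BpmnWorkflow instance named workflow, the string can be used to filter tasks without isinstance checks:

    # spec_type is a plain string on each task spec ("workflow" is assumed here)
    user_tasks = [t for t in workflow.get_tasks()
                  if t.task_spec.spec_type == "User Task"]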
SpiffWorkflow 1.2: remove all references to timeit (it is no longer in SpiffWorkflow)
SpiffWorkflow 1.2: pythonScriptEngine._evaluate no longer accepts a task argument.
SpiffWorkflow 1.2: CancelEventDefinition was removed - please use SignalEventDefinition instead.
EX: REPLACE: bpmn_workflow.signal('cancel')  # generate a cancel signal
             bpmn_workflow.catch(CancelEventDefinition())
WITH: bpmn_workflow.catch(SignalEventDefinition('cancel'))
SpiffWorkflow 1.2: Task states are JUST integers, and TaskStateNames is now a public dictionary that can be used to convert a state to a human-readable string.
EX: REPLACE: user_task.state.name
WITH: TaskStateNames[user_task.state]
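A short sketch of how this might look when building an API response, assuming TaskStateNames lives in SpiffWorkflow.task alongside TaskState (an assumption consistent with the new import locations above):

    from SpiffWorkflow.task import TaskState, TaskStateNames

    def task_summary(task):
        return {
            "name": task.task_spec.name,
            "state": TaskStateNames[task.state],      # e.g. "READY"
            "is_ready": task.state == TaskState.READY,
        }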
However, it previously returned warnings to the front end for debugging. Ticket 733 was created to address this: fix the warnings and return them again.
This also broke the tests in test_study_status_message, so we skip them for now.
Merging in code to improve performance of calculating percent complete for a study.
Assuring we have a primary investigator for the front page (another merge)
Also: All script tasks should raise WorkflowTaskExecExceptions - NOT APIExceptions. This is because our scripts are executed by Spiff (not the other way around), so the errors need to pass fluidly through Spiff and come back to us; THEN we can convert them to APIErrors. Otherwise we lose all kinds of good information about the error.
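A hedged sketch of the pattern (the import path is an assumption based on SpiffWorkflow 1.2 and may differ by version; the lookup logic and names are illustrative, not our actual script):

    from SpiffWorkflow.bpmn.exceptions import WorkflowTaskExecException

    def fetch_study(task, study_id, studies):
        # Raise the Spiff-side exception inside the script so the engine can attach
        # task context on its way back to us; convert it to an ApiError afterwards.
        if study_id not in studies:
            raise WorkflowTaskExecException(task, f"No study found with id {study_id}")
        return studies[study_id]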
1. Avoid ever re-generating the list of scripts that can be used in a script task. It is terribly expensive, as we call eval constantly, and the list never changes once the app starts. (see script.py changes and comments, and the caching sketch after this list)
2. Cache the DocumentStatus list in the Flask session, so we calculate it at most once per API call. It takes at least 0.25 seconds per call. (see study_service)
3. We called UserFileService.get_files_for_study (which runs a db query EVERY time) for every possible document type. Now we run the query once. (study service, line 321; see the grouping sketch after this list)
4. When returning a workflow, we looped through every single task in that workflow's navigation and called the expensive spiff_task_to_api_task just to figure out its proper display name. We now run a much faster and more efficient method to calculate the display name. (see workflow_service, lines 680 and 799)
5. A hellton of @timeit and sincetime() calls that I want to leave in, to help debug any slowness on production.
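For item 1, a minimal sketch of the caching idea (the function body is a placeholder for the expensive reflection/eval work in script.py; the script names are made up):

    from functools import lru_cache

    @lru_cache(maxsize=1)                 # computed once per process, never again
    def generate_script_list():
        # stand-in for the expensive discovery work; the real list is built from
        # the Script subclasses and never changes after the app starts
        return ("example_script_one", "example_script_two")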
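For item 3, a hedged sketch of the single-query approach (the field and function names are illustrative, not the actual UserFileService API):

    from collections import defaultdict

    def group_files_by_doc_code(all_study_files):
        # one pass over the single query's results instead of one query per doc type
        grouped = defaultdict(list)
        for f in all_study_files:
            grouped[f.irb_doc_code].append(f)
        return grouped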