Squashed 'SpiffWorkflow/' changes from 73886584b..01a25fc3f

01a25fc3f Merge pull request #333 from sartography/feature/ruff
99c7bd0c7 ruff linting fixes
56d170ba1 Cleaning up badges in the readme.
51c13be93 tweaking action, adding button
96275ad7c Adding a github action to run tests
c6c40976a minor fix to please sonarcloud.
03316babb Merge pull request #332 from sartography/updates-for-2.0-release
ab70a34b5 Release Notes for 2.0.0_rc1
f0bf79bd9 copy edits
a7c726951 Release Notes for 2.0.0_rc1
5f0468ba4 Merge pull request #330 from sartography/updates-for-2.0-release
b9ad24406 Mostly minor edits
e284dd8e2 corrections and tweaks to documentation
4b2e62600 add more examples
1ea258c6a update spiffworkflow concepts
851d7cdf6 fix a few bugs I found while testing the example repo
7a0a6bdf8 update bpmn docs
07c153f2d save/restore nested subprocess tests
340e9983b Merge branch 'main' of github.com:sartography/spiffworkflow into main
618afbc59 It is rare to submit an update that touches upon both religion and the origins of the universe. I think, for the sake of supporting all view points we must offer the possibility that there can be a thing that is not a child, but rather the beginning of all childen, that there is a chicken to the first egg, a single original big bank.
a68dec77e use raw strings for regexes using escape sequences w/ burnettk
4644f2810 Merge pull request #329 from sartography/task/remove-deprecated-functions
ca65602c0 correct typo in filename
39ab83f1f remove one deprecated and unused feature
23d54e524 Merge pull request #328 from sartography/improvement/task-spec-attributes
544614aa9 change dmn bpmn_id method to property
12ad185a4 update bpmnworkflow.waiting_events to use classname
aec77097d fix some typos & add a few missing licenses
4b87c6d0c add some changes that didn't get included in the merge commit
965a5d4e1 Merge branch 'main' into improvement/task-spec-attributes
a844b34f9 alternate bomnworkflow.cancel
0a455cdd2 Merge pull request #327 from sartography/feature/mark_tasks_in_sub_workflows_as_future_if_reseting_to_a_task_before_subworkflow
2bda992aa cancel tasks in subprocesses and return cancelled tasks
309937362 take account that we reset the parent when checking all sub-process executions.
d4bcf1290 handle nested subprocesses when resetting tasks
032bedea6 reset subprocess task when resetting a task inside the subprocess
3a6abe157 change reset workflow to drop tasks and re-predict
e9cd65757 move exceptions for bpmn into bpmn package
e654f2ff1 add bpmn_id and bpmn_name attributes to task specs
74bb9cf1a Found that tasks within a sub-workflow were left in a state of "READY" after resetting to task before the sub-workflow.
957a8faec make all task specs in bpmn processes bpmn tasks
b6070005c create actual mixin classes & improve package structure
666a9e4e5 Merge pull request #326 from sartography/feature/boundary_event_reset_fix
9fe5ae4ad Whenever a task is reset who's parent is a "_ParentBoundaryEvent" class, reset to that parent boundary event instead, and execute it, so that all the boundary events are reset to the correct point as well.
fbc071af5 remove 'is_engine_step' and use existing 'manual' attribute instead
0d8e53a25 remove unused attributes, minor parser improvements
6ae98b585 Merge pull request #325 from sartography/bugfix/make-data-objects-available-to-gateways
cefcd3733 make data objects available to gateways
6060fe778 Merge pull request #324 from sartography/task/update-license
efa24bed2 update license
56271f7f7 Merge pull request #323 from sartography/bugfix/handle-dash-in-dmn
6de4e7e01 Merge pull request #322 from sartography/improvement/remove-celery
6ee0668cb remove unnecessary dependencies in test
7ceae68c2 change literal '-' in DMN input to None
4cffc7e7a remove celery task and dependency
580d6e516 Merge pull request #321 from sartography/improvement/allow-duplicate-subprocess-names
e4440d4df remove legacy signavio parser
477a23184 remove absolute imports from tests failing in CI
15a812a92 use process ids only when storing process specs
abaf1b9e9 move parallel gateway tests to their own package
29fd2d0d9 remove some redundant, unused, or unnecessary tests & consolidate others
fda1480bc remove unused CORRELATE attribute from tests
21a2fdbee remove signavio files
299c2613c Merge pull request #320 from sartography/parser_funcs
01afc9f6e PR feedback
646737834 Cleanup
dfd3f8214 Add same methods for dmn
764e33ccd Rename file, fix tests
9646abca4 Add bpmn in memory parser functions and tests
58f6bd317 Merge pull request #319 from sartography/feature/better_task_order_for_sub_processes
fd7c9308f By swapping the order of these lines, we can assure that a call activity is returned BEFORE the tasks that it contains, rather than after it.
0a7ec19d6 Merge pull request #318 from sartography/feature/optionally-skip-call-activities-when-parsing
3430a2e9f add option to skip parsing call activities
1b1da1dd2 Merge pull request #317 from sartography/bugfix/non-bpmn-tutorial
e82345d68 remove some bpmn-related stuff from core serializer
6f9bc279c use name for inputs/outputs in base serializer -- not sure why this was ever changed

git-subtree-dir: SpiffWorkflow
git-subtree-split: 01a25fc3f829786c4b65d19fd0fda408de37c79f
This commit is contained in:
burnettk 2023-05-29 17:31:34 -04:00
parent c4f4e008d8
commit c904ee907b
270 changed files with 4407 additions and 149128 deletions

.github/workflows/tests.yaml (new file)

@ -0,0 +1,32 @@
name: Unit Tests in Python 3.8, 3.9, 3.10, 3.11
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ruff pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      # - name: Lint with ruff
      #   run: |
      #     # stop the build if there are Python syntax errors or undefined names
      #     ruff --format=github --select=E9,F63,F7,F82 --target-version=py37 .
      #     # default set of ruff rules with GitHub Annotations
      #     ruff --format=github --target-version=py37 .
      - name: Test with pytest
        run: |
          python -m unittest discover -v -s ./tests/SpiffWorkflow/ -p '*Test.py'


@ -1,7 +0,0 @@
sonar.organization=sartography
sonar.projectKey=sartography_SpiffWorkflow
sonar.host.url=https://sonarcloud.io
sonar.exclusions=*.bpmn,*.dmn,doc/**
sonar.sources=SpiffWorkflow
sonar.test.inclusions=tests
sonar.python.coverage.reportPaths=tests/SpiffWorkflow/coverage.xml


@ -57,4 +57,4 @@ New versions of SpiffWorkflow are automatically published to PyPi whenever
a maintainer of our GitHub repository creates a new release on GitHub. This
is managed through GitHub's actions. The configuration of which can be
found in .github/workflows/....
Just create a release in GitHub that matches the release number in doc/conf.py


@ -19,9 +19,7 @@ strategy for building Low-Code applications.
## Build status
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=sartography_SpiffWorkflow&metric=alert_status)](https://sonarcloud.io/dashboard?id=sartography_SpiffWorkflow)
[![SpiffWorkflow](https://github.com/sartography/SpiffWorkflow/actions/workflows/tests.yaml/badge.svg)](https://github.com/sartography/SpiffWorkflow/actions/workflows/tests.yaml)
[![Documentation Status](https://readthedocs.org/projects/spiffworkflow/badge/?version=latest)](http://spiffworkflow.readthedocs.io/en/latest/?badge=latest)
[![Issues](https://img.shields.io/github/issues/sartography/spiffworkflow)](https://github.com/sartography/SpiffWorkflow/issues)
[![Pull Requests](https://img.shields.io/github/issues-pr/sartography/spiffworkflow)](https://github.com/sartography/SpiffWorkflow/pulls)

RELEASE_NOTES.md (new file)

@ -0,0 +1,138 @@
## What's Changed
We've done a lot of work over the last 8 months to the SpiffWorkflow library as we've developed [SpiffArena](https://www.spiffworkflow.org/), a general purpose workflow management system built on top of this library.
This has resulted in just a handful of new features.
Our main focus was on making SpiffWorkflow more predictable, easier to use, and internally consistent.
## Breaking Changes from 1.x:
* We heavily refactored the way we handle multi-instance tasks internally. This will break any serialized workflows that contain multi-instance tasks.
* The internal structure of our code, the names of classes, and common methods have changed. Please see our [ReadTheDocs](https://readthedocs.org/projects/spiffworkflow/) documentation for version 2.0.0.
## Features and Improvements
### Task States, Transitions, Hooks, and Execution
Prior to 2.0, SpiffWorkflow was a little weird about its states, performing the actual execution in the on_complete() hook.
This was VERY confusing.
Tasks now have a _run() command separate from state change hooks.
The return value of the _run() command can be true (success), false (failure), or None (not yet done).
This opens the door for better overall state management at the moment it is most critical (when the task is actually executing).
We also added a new task state called "STARTED" that describes a task that has been started but hasn't finished yet, an oddly missing state in previous versions.
* Improvement/execution and serialization cleanup by @essweine in https://github.com/sartography/SpiffWorkflow/pull/289
* Bugfix/execute tasks on ready by @essweine in https://github.com/sartography/SpiffWorkflow/pull/303
* Feature/standardize task execution by @essweine in https://github.com/sartography/SpiffWorkflow/pull/307
* do not execute boundary events in catch by @essweine in https://github.com/sartography/SpiffWorkflow/pull/312
* Feature/new task states by @essweine in https://github.com/sartography/SpiffWorkflow/pull/315
### Improved Events
We refactored the way we handle events, making them more powerful and adaptable.
Timer events are now parsed according to the [ISO 8601 standard](https://en.wikipedia.org/wiki/ISO_8601).
* Feature/multiple event definition by @essweine in https://github.com/sartography/SpiffWorkflow/pull/268
* hacks to handle timer events like regular events by @essweine in https://github.com/sartography/SpiffWorkflow/pull/273
* Feature/improved timer events by @essweine in https://github.com/sartography/SpiffWorkflow/pull/284
* reset boundary events in loops by @essweine in https://github.com/sartography/SpiffWorkflow/pull/294
* Bugfix/execute event gateways on ready by @essweine in https://github.com/sartography/SpiffWorkflow/pull/308
### Improved Multi-Instance Tasks
We refactored how Multi-instance tasks are handled internally, vastly simplifying their representation during execution and serialization.
No more 'phantom gateways.'
* Feature/multiinstance refactor by @essweine in https://github.com/sartography/SpiffWorkflow/pull/292
### Improved SubProcesses
SpiffWorkflow did not previously distinguish between a Call Activity and a SubProcess, but they handle Data Objects very differently.
A SubProcess is now able to access its parent's data objects; a Call Activity cannot.
We also wanted the ability to execute Call Activities independently of the parent process.
* Bugfix/subprocess access to data objects by @essweine in https://github.com/sartography/SpiffWorkflow/pull/296
* start workflow while subprocess is waiting by @essweine in https://github.com/sartography/SpiffWorkflow/pull/302
* use same data objects & references in subprocesses after deserialization by @essweine in https://github.com/sartography/SpiffWorkflow/pull/314
### Improved Data Objects / Data Stores
This work will continue in subsequent releases, but we have added support for Data Stores, and it is possible to provide your own implementations.
* Data stores by @jbirddog in https://github.com/sartography/SpiffWorkflow/pull/298
* make data objects available to gateways by @essweine in https://github.com/sartography/SpiffWorkflow/pull/325
### Improved Inclusive Gateways
We added support for Inclusive Gateways.
* Feature/inclusive gateway support by @essweine in https://github.com/sartography/SpiffWorkflow/pull/286
### Pre and Post Script Fixes
We previously supported adding a pre-script or post-script to any task but there were a few lingering bugs that needed fixing.
* parse spiff script extensions in service tasks by @essweine in https://github.com/sartography/SpiffWorkflow/pull/257
* pass script to workflow task exec exception by @essweine in https://github.com/sartography/SpiffWorkflow/pull/258
* update execution order for postscripts by @essweine in https://github.com/sartography/SpiffWorkflow/pull/259
### DMN Improvements
We now support a new hit policy of "COLLECT" which allows you to match on an array of items. DMN support is still limited, but
we are making headway. We would love to know if people are using these features.
* Support for the "COLLECT" hit policy. by @danfunk in https://github.com/sartography/SpiffWorkflow/pull/267
* Bugfix/handle dash in DMN by @essweine in https://github.com/sartography/SpiffWorkflow/pull/323
### BPMN Validation
We improved validation of BPMN and DMN Files to catch errors earlier.
* Feature/xml validation by @essweine and @danfunk in https://github.com/sartography/SpiffWorkflow/pull/256
### New Serializer
There are some breaking changes in the new serializer, but it is much faster and more stable. We do attempt to upgrade
your serialized workflows to the new format, but you will definitely encounter issues if you were using multi-instance tasks.
* update serializer version by @essweine in https://github.com/sartography/SpiffWorkflow/pull/277
* Feature/remove old serializer by @essweine in https://github.com/sartography/SpiffWorkflow/pull/278
### Lightning Fast, Stable Tests
* Fix ResourceWarning: unclosed file BpmnParser.py:60 by @jbirddog in https://github.com/sartography/SpiffWorkflow/pull/270
* Option to run tests in parallel by @jbirddog in https://github.com/sartography/SpiffWorkflow/pull/271
### Better Errors
* Feature/better errors by @danfunk in https://github.com/sartography/SpiffWorkflow/pull/283
* Workflow Data Exceptions were broken in the previous error refactor. … by @danfunk in https://github.com/sartography/SpiffWorkflow/pull/287
* added an exception for task not found w/ @burnettk by @jasquat in https://github.com/sartography/SpiffWorkflow/pull/310
* give us a better error if for some reason a task does not exist by @burnettk in https://github.com/sartography/SpiffWorkflow/pull/311
### Flexible Data Management
* Allow for other PythonScriptEngine environments besides task data by @jbirddog in https://github.com/sartography/SpiffWorkflow/pull/288
### Various Enhancements
Make it easier to reference SpiffWorkflow library classes from your own code.
* Feature/add init to schema by @jasquat in https://github.com/sartography/SpiffWorkflow/pull/260
* cleaning up code smell by @danfunk in https://github.com/sartography/SpiffWorkflow/pull/261
* Feature/cleanup task completion by @essweine in https://github.com/sartography/SpiffWorkflow/pull/263
* disambiguate DMN expressions by @essweine in https://github.com/sartography/SpiffWorkflow/pull/264
* Add in memory BPMN/DMN parser functions by @jbirddog in https://github.com/sartography/SpiffWorkflow/pull/320
### Better Introspection
Added the ability to ask SpiffWorkflow some useful questions about a specification, such as "What call activities does this depend on?",
"What messages does this process send and receive?", and "What lanes exist on this workflow specification?"
* Parser Information about messages, correlation keys, and the presence of lanes by @danfunk in https://github.com/sartography/SpiffWorkflow/pull/262
* Called elements by @jbirddog in https://github.com/sartography/SpiffWorkflow/pull/316
### Code Cleanup
* Improvement/task spec attributes by @essweine in https://github.com/sartography/SpiffWorkflow/pull/328
* update license by @essweine in https://github.com/sartography/SpiffWorkflow/pull/324
* Feature/remove unused BPMN attributes and methods by @essweine in https://github.com/sartography/SpiffWorkflow/pull/280
* Improvement/remove camunda from base and misc cleanup by @essweine in https://github.com/sartography/SpiffWorkflow/pull/295
* remove minidom by @essweine in https://github.com/sartography/SpiffWorkflow/pull/300
* Feature/remove loop reset by @essweine in https://github.com/sartography/SpiffWorkflow/pull/305
* Feature/create core test package by @essweine in https://github.com/sartography/SpiffWorkflow/pull/306
* remove celery task and dependency by @essweine in https://github.com/sartography/SpiffWorkflow/pull/322
* remove one deprecated and unused feature by @essweine in https://github.com/sartography/SpiffWorkflow/pull/329
* change the order of tasks when calling get_tasks() by @danfunk in https://github.com/sartography/SpiffWorkflow/pull/319
### Improved Documentation
* Fixes grammar, typos, and spellings by @rachfop in https://github.com/sartography/SpiffWorkflow/pull/291
* Updates for 2.0 release by @essweine in https://github.com/sartography/SpiffWorkflow/pull/330
* Bugfix/non BPMN tutorial by @essweine in https://github.com/sartography/SpiffWorkflow/pull/317
### Bug Fixes
* correct xpath for extensions by @essweine in https://github.com/sartography/SpiffWorkflow/pull/265
* prevent output associations from being removed twice by @essweine in https://github.com/sartography/SpiffWorkflow/pull/275
* fix for workflowspec dump by @subhakarks in https://github.com/sartography/SpiffWorkflow/pull/282
* add checks for len == 0 when copying based on io spec by @essweine in https://github.com/sartography/SpiffWorkflow/pull/297
* Improvement/allow duplicate subprocess names by @essweine in https://github.com/sartography/SpiffWorkflow/pull/321
* Resets to tasks with Boundary Events by @danfunk in https://github.com/sartography/SpiffWorkflow/pull/326
* Sub-workflow tasks should be marked as "Future" when resetting to a task before the sub-process. by @danfunk in https://github.com/sartography/SpiffWorkflow/pull/327
## New Contributors
* @subhakarks made their first contribution in https://github.com/sartography/SpiffWorkflow/pull/282
* @rachfop made their first contribution in https://github.com/sartography/SpiffWorkflow/pull/291
**Full Changelog**: https://github.com/sartography/SpiffWorkflow/compare/v1.2.1...v2.0.0


@ -1,48 +1,91 @@
A More In-Depth Look at Some of SpiffWorkflow's Features
========================================================
BPMN Task Specs
---------------
BPMN Tasks inherit quite a few attributes from :code:`SpiffWorkflow.specs.base.TaskSpec`, but only a few are used.
* `name`: the unique id of the TaskSpec, and it will correspond to the BPMN ID if that is present
* `description`: we use this attribute to provide a description of the BPMN type (the text that appears here can be overridden in the parser)
* `inputs`: a list of TaskSpec `names` that are parents of this TaskSpec
* `outputs`: a list of TaskSpec `names` that are children of this TaskSpec
* `manual`: :code:`True` if human input is required to complete tasks associated with this TaskSpec
BPMN Tasks have the following additional attributes.
* `bpmn_id`: the ID of the BPMN Task (this will be :code:`None` if the task is not visible on the diagram)
* `bpmn_name`: the BPMN name of the Task
* `lane`: the lane of the BPMN Task
* `documentation`: the contents of the BPMN `documentation` element for the Task
* `data_input_associations`: a list of incoming data object references
* `data_output_associations`: a list of outgoing data object references
* `io_specification`: the BPMN IO specification of the Task
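For example, a quick way to inspect these attributes on a loaded workflow (a sketch; `workflow` is assumed
to be a :code:`BpmnWorkflow` as in the earlier examples):

.. code:: python

    for name, spec in workflow.spec.task_specs.items():
        # Specs SpiffWorkflow adds internally aren't visible on the diagram,
        # so their bpmn_id is None
        if spec.bpmn_id is not None:
            print(name, spec.bpmn_name, spec.lane, spec.description)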
Filtering Tasks
---------------
In our earlier example, all we did was check the lane a task was in and display
it along with the task name and state.
Tasks by Lane
^^^^^^^^^^^^^
Let's take a look at a sample workflow with lanes:
.. figure:: figures/lanes.png
   :scale: 30%
   :align: center

   Workflow with lanes
The :code:`workflow.get_ready_user_tasks` method optionally takes the argument `lane`, which can be used to
restrict the tasks returned to only tasks in that lane.

.. code:: python

    ready_tasks = workflow.get_ready_user_tasks(lane='Customer')

will return only tasks in the 'Customer' lane in our example workflow.
If there are no ready tasks in the 'Customer' lane, you'll get an empty list, and of course if no lane is
labeled 'Customer', you'll *always* get an empty list.
Tasks by Spec Name
^^^^^^^^^^^^^^^^^^
To retrieve a list of tasks associated with a particular task spec, use :code:`workflow.get_tasks_from_spec_name`

.. code:: python

    tasks = workflow.get_tasks_from_spec_name('customize_product')

will return a list containing the Call Activities for the customization of a product in our example workflow.

.. note::

   The `name` parameter here refers to the task spec name, not the BPMN name (for visible tasks, this will
   be the same as the `bpmn_id`).
Tasks by State
^^^^^^^^^^^^^^
We need to import the :code:`TaskState` object (unless you want to memorize which numbers correspond to which states).

.. code:: python

    from SpiffWorkflow.task import TaskState

    tasks = workflow.get_tasks(TaskState.COMPLETED)

will return a list of completed tasks.
See :doc:`../concepts` for more information about task states.
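Task states are bit flags, so a mask can select several states at once. A short sketch of that usage
(assuming :code:`get_tasks` accepts a combined mask, as the mask constants defined on :code:`TaskState` suggest):

.. code:: python

    from SpiffWorkflow.task import TaskState

    # READY and WAITING are distinct bits, so a bitwise OR selects both states
    active_tasks = workflow.get_tasks(TaskState.READY | TaskState.WAITING)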
Tasks in a Subprocess or Call Activity
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The :code:`BpmnWorkflow` class maintains a dictionary of subprocesses (the key is the `id` of the Call Activity or
Subprocess Task). :code:`workflow.get_tasks` will start at the top level workflow and recurse through the subprocesses
to create a list of all tasks. It is also possible to start from a particular subprocess:
.. code:: python

    tasks = workflow.get_tasks_from_spec_name('customize_product')
    subprocess = workflow.get_subprocess(tasks[0])
    subprocess_tasks = workflow.get_tasks(workflow=subprocess)

will limit the list of returned tasks to only those in the first product customization.

.. note::

   Each :code:`Task` object has a reference to its workflow; so with a Task inside a subprocess, we can call
   :code:`workflow.get_tasks(workflow=task.workflow)` to start from our current workflow.
Logging
-------
@ -50,7 +93,7 @@ Logging
Spiff provides several loggers:
- the :code:`spiff` logger, which emits messages when a workflow is initialized and when tasks change state
- the :code:`spiff.metrics` logger, which emits messages containing the elapsed duration of tasks
- the :code:`spiff.data` logger, which emits a message when :code:`task.update_data` is called or workflow data is retrieved or set
Log level :code:`INFO` will provide reasonably detailed information about state changes.
@ -62,203 +105,164 @@ we define a custom log level
.. code:: python
    logging.addLevelName(15, 'DATA')
so that we can see the task data in the logs without fully enabling debugging.
The workflow runners take an `-l` argument that can be used to specify the logging level used when running the example workflows.
We'll write the logs to a file called `data.log` instead of the console to avoid printing very long messages during the workflow.
Our logging configuration code can be found in `runner/shared.py`. Most of the code is about logging
configuration in Python rather than anything specific to SpiffWorkflow, so we won't go over it in depth.
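A rough sketch of such a configuration (the handler and format here are illustrative assumptions, not the
exact contents of `runner/shared.py`):

.. code:: python

    import logging

    logging.addLevelName(15, 'DATA')

    # State changes are logged at INFO on the main logger
    spiff_logger = logging.getLogger('spiff')
    spiff_logger.setLevel(logging.INFO)

    # Send the potentially very long data messages to a file instead of the console
    data_logger = logging.getLogger('spiff.data')
    handler = logging.FileHandler('data.log')
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    data_logger.addHandler(handler)
    data_logger.setLevel(15)
    data_logger.propagate = False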
Parsing
-------
Each of the BPMN packages (:code:`bpmn`, :code:`spiff`, or :code:`camunda`) has a parser that is preconfigured with
the specs in that package (if a particular TaskSpec is not implemented in the package, the :code:`bpmn` TaskSpec is used).
See the example in :doc:`synthesis` for the basics of creating a parser. The parser can optionally be initialized with
- a set of namespaces (useful if you have custom extensions)
- a BPMN Validator (the one in the :code:`bpmn` package validates against the BPMN 2.0 spec)
- a mapping of XML tag to Task Spec Descriptions. The default set of descriptions can be found in
:code:`SpiffWorkflow.bpmn.parser.spec_descriptions`. These values will be added to the Task Spec in the `description` attribute
and are intended as a user-friendly description of what the task is.
The :code:`BpmnValidator` can be used and extended independently of the parser as well; call :code:`validate` with
an :code:`lxml` parsed tree.
Loading BPMN Files
^^^^^^^^^^^^^^^^^^
In addition to :code:`load_bpmn_file`, there are similar functions: :code:`load_bpmn_str`, which loads the XML from a string;
:code:`load_bpmn_io`, which loads XML from any object implementing the IO interface; and :code:`add_bpmn_xml`, which loads
BPMN specs from an :code:`lxml` parsed tree.
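For example, a minimal sketch of parsing from a string rather than a file (the parser class and the
'order_product' process ID come from the tutorial; substitute your own):

.. code:: python

    from SpiffWorkflow.spiff.parser import SpiffBpmnParser

    parser = SpiffBpmnParser()
    with open('bpmn/tutorial/top_level.bpmn') as fh:
        parser.load_bpmn_str(fh.read())  # equivalent to load_bpmn_file
    spec = parser.get_spec('order_product')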
Dependencies
^^^^^^^^^^^^
The following methods are available for discovering the names of processes and DMN files that may be defined externally:
- :code:`get_subprocess_specs`: Returns a mapping of name -> :code:`BpmnWorkflowSpec` for any Call Activities referenced by the
provided spec (searches recursively)
- :code:`find_all_specs`: Returns a mapping of name -> :code:`BpmnWorkflowSpec` for all processes used in all files that have been
  provided to the parser at that point.
- :code:`get_process_dependencies`: Returns a list of process IDs referenced by the provided process ID
- :code:`get_dmn_dependencies`: Returns a list of DMN IDs referenced by the provided process ID
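Continuing with the parser above, a sketch of how these helpers might be called (argument shapes are
assumptions based on the descriptions in this list):

.. code:: python

    subprocess_specs = parser.get_subprocess_specs('order_product')
    all_specs = parser.find_all_specs()
    process_dependencies = parser.get_process_dependencies('order_product')
    dmn_dependencies = parser.get_dmn_dependencies('order_product')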
Serialization
-------------
So far, we've only considered the context where we will run the workflow from beginning to end in one
sitting. This may not always be the case: we may be executing the workflow in the context of a web server,
where a user opens a page with a specific workflow that is already in progress, does one step of it, and
then comes back a few minutes, or maybe a few hours, later, depending on the application.

.. warning::

   Serialization changed in Version 1.1.7.
   Support for pre-1.1.7 serialization will be dropped in a future release.
   The old serialization method still works, but it is deprecated.
   To migrate your system to the new version, see "Migrating Between Serialization Versions" below.

The :code:`BpmnWorkflowSerializer` has two components:

* the `workflow_spec_converter` (which handles serialization of objects that SpiffWorkflow knows about)
* the `data_converter` (which handles serialization of custom objects)

Unless you have overridden any of the TaskSpecs with custom specs, you should be able to use the serializer
configuration from the package you are importing the parser from (:code:`bpmn`, :code:`spiff`, or :code:`camunda`).
See :doc:`synthesis` for an example.
Serializing Custom Objects
^^^^^^^^^^^^^^^^^^^^^^^^^^

Strictly speaking, these are not serializers per se: they actually convert objects into dictionaries of
JSON-serializable objects. Conversion to JSON is done only as the last step and could easily be replaced with some
other output format.

In `Custom Script Engines`_ , we add some custom methods and objects to our scripting environment. We create a simple
class (a :code:`namedtuple`) that holds the product information for each product.
We'd like to be able to save and restore our custom object.

.. code:: python

    ProductInfo = namedtuple('ProductInfo', ['color', 'size', 'style', 'price'])

    def product_info_to_dict(obj):
        return {
            'color': obj.color,
            'size': obj.size,
            'style': obj.style,
            'price': obj.price,
        }

    def product_info_from_dict(dct):
        return ProductInfo(**dct)

    registry = DictionaryConverter()
    registry.register(ProductInfo, product_info_to_dict, product_info_from_dict)

Here we define two functions, one for turning our object into a dictionary of serializable objects, and one for
recreating the object from the dictionary representation we created.
We initialize a :code:`DictionaryConverter` and `register` the class and methods.
Registering an object sets up relationships between the class and the serialization and deserialization methods. We go
over how this works in a little more detail in `Custom Serialization in More Depth`_.

In our example runner, we give the user the option of dumping the workflow state at any time.

.. code:: python

    filename = input('Enter filename: ')
    state = serializer.serialize_json(workflow)
    with open(filename, 'w') as dump:
        dump.write(state)

To restore the workflow:

.. code:: python

    if args.restore is not None:
        with open(args.restore) as state:
            wf = serializer.deserialize_json(state.read())

It is also possible to bypass using a :code:`DictionaryConverter` at all for the data serialization process (but not for
the spec serialization process). The only requirement for the `data_converter` is that it implement the methods

- `convert`, which takes an object and returns something JSON-serializable
- `restore`, which takes a serialized version and returns an object
Serialization Versions
^^^^^^^^^^^^^^^^^^^^^^

As we make changes to Spiff, we may change the serialization format. For example, in 1.2.1, we changed
how subprocesses were handled internally in BPMN workflows, updated how they are serialized, and upgraded the
serializer version to 1.1.
As we release SpiffWorkflow 2.0, there are several more substantial changes, and we'll upgrade the serializer version to 1.2.

Since workflows can contain arbitrary data, and even SpiffWorkflow's internal classes are designed to be customized in ways
that might require special serialization and deserialization, it is possible to override the default version number, to
provide users with a way of tracking their own changes. This can be accomplished by setting the `VERSION` attribute on
the :code:`BpmnWorkflowSerializer` class.
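A minimal sketch of that override (the import path reflects the 2.0 package layout; the version string is arbitrary):

.. code:: python

    from SpiffWorkflow.bpmn.serializer.workflow import BpmnWorkflowSerializer

    class MySerializer(BpmnWorkflowSerializer):
        # This string will be embedded in every workflow we serialize
        VERSION = 'MY_APP_V_1.0'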
The workflow serializer is designed to be flexible and modular, and as such is a little complicated. It has
two components:
- a workflow spec converter (which handles workflow and task specs)
- a data converter (which handles workflow and task data).
The default workflow spec converter is likely to meet your needs, either on its own, or with the inclusion of
:code:`UserTask` and :code:`BusinessRuleTask` from the :code:`camunda` or :code:`spiff` and :code:`dmn` subpackages
of this library; all you'll need to do is add them to the list of task converters.
However, the default data converter is very simple, adding only JSON-serializable conversions of :code:`datetime`
and :code:`timedelta` objects (we make these available in our default script engine) and UUIDs. If your
workflow or task data contains objects that are not JSON-serializable, you'll need to extend ours, or extend
its base class to create one of your own.
To extend ours:
1. Subclass the base data converter
2. Register classes along with functions for converting them to and from dictionaries
.. code:: python

    from SpiffWorkflow.bpmn.serializer.dictionary import DictionaryConverter

    class MyDataConverter(DictionaryConverter):

        def __init__(self):
            super().__init__()
            self.register(MyClass, self.my_class_to_dict, self.my_class_from_dict)

        def my_class_to_dict(self, obj):
            return obj.__dict__

        def my_class_from_dict(self, dct):
            return MyClass(**dct)
More information can be found in the class documentation for the
`default converter <https://github.com/sartography/SpiffWorkflow/blob/main/SpiffWorkflow/bpmn/serializer/bpmn_converters.py>`_
and its `base class <https://github.com/sartography/SpiffWorkflow/blob/main/SpiffWorkflow/bpmn/serializer/dictionary.py>`_.
You can also replace ours entirely with one of your own. If you do so, you'll need to implement `convert` and
`restore` methods. The former should return a JSON-serializable representation of your workflow data; the
latter should recreate your data from the serialization.
If you have written any custom task specs, you'll need to implement task spec converters for those as well.
Task Spec converters are also based on the :code:`DictionaryConverter`. You should be able to use the
`BpmnTaskSpecConverter <https://github.com/sartography/SpiffWorkflow/blob/main/SpiffWorkflow/bpmn/serializer/bpmn_converters.py>`_
as a basis for your custom specs. It provides some methods for extracting attributes from Spiff base classes as well as
standard BPMN attributes from tasks that inherit from :code:`BpmnSpecMixin`.
The `Camunda User Task Converter <https://github.com/sartography/SpiffWorkflow/blob/main/SpiffWorkflow/camunda/serializer/task_spec_converters.py>`_
should provide a simple example of how you might create such a converter.
Migrating Between Serialization Versions
----------------------------------------
Old (Non-Versioned) Serializer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prior to Spiff 1.1.7, the serialized output did not contain a version number.
.. code:: python
    old_serializer = BpmnSerializer()  # the deprecated serializer

    # new serializer, which can be customized as described above
    serializer = BpmnWorkflowSerializer(version="MY_APP_V_1.0")
The new serializer has a :code:`get_version` method that will read the version
back out of the serialized json. If the version isn't found, it will return
:code:`None`, and you can then assume it is using the old style serializer.
.. code:: python
    version = serializer.get_version(some_json)
    if version == "MY_APP_V_1.0":
        workflow = serializer.deserialize_json(some_json)
    else:
        workflow = old_serializer.deserialize_workflow(some_json, workflow_spec=spec)
If you are not using any custom tasks and do not require custom serialization, then you'll be able to
serialize the workflow in the new format:
.. code:: python
    new_json = serializer.serialize_json(workflow)
However, if you use custom tasks or data serialization, you'll also need to specify workflow spec or data
serializers, as in the examples in the previous section, before you'll be able to serialize with the new serializer.
The code would then look more like this:
.. code:: python
    from SpiffWorkflow.camunda.serializer import UserTaskConverter

    old_serializer = BpmnSerializer()  # the deprecated serializer

    # new serializer, with customizations
    wf_spec_converter = BpmnWorkflowSerializer.configure_workflow_spec_converter([UserTaskConverter])
    data_converter = MyDataConverter
    serializer = BpmnWorkflowSerializer(wf_spec_converter, data_converter, version="MY_APP_V_1.0")

    version = serializer.get_version(some_json)
    if version == "MY_APP_V_1.0":
        workflow = serializer.deserialize_json(some_json)
    else:
        workflow = old_serializer.deserialize_workflow(some_json, workflow_spec=spec)

    new_json = serializer.serialize_json(workflow)
Because the serializer is highly customizable, we've made it possible for you to manage your own versions of the
serialization. You can do this by passing a version number into the serializer, which will be embedded in the
json of all workflows. This allows you to modify the serialization and customize it over time, and still manage
the different forms as you make adjustments without leaving people behind.
Versioned Serializer
^^^^^^^^^^^^^^^^^^^^
As we make changes to Spiff, we may change the serialization format. For example, in 1.1.8, we changed
how subprocesses were handled internally in BPMN workflows and updated how they are serialized. If you have
not provided a custom version number, SpiffWorkflow will attempt to migrate your workflows from one version
to the next if they were serialized in an earlier format (for example, transforming the 1.0 format to 1.1).
If you've overridden the serializer version, you may need to incorporate our serialization changes with
your own. You can find our conversions in
`SpiffWorkflow/bpmn/serializer/migrations <https://github.com/sartography/SpiffWorkflow/tree/main/SpiffWorkflow/bpmn/serializer/migration>`_.
These are broken up into functions that handle each individual change, which will hopefully make it easier to incorporate them
into your upgrade process, and also provides some documentation on what has changed.
Custom Serialization in More Depth
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Both of the serializer components mentioned in `Serialization`_ are based on the :code:`DictionaryConverter`. Let's import
it, create one, and register a type:

.. code:: python

    from datetime import datetime
    from SpiffWorkflow.bpmn.serializer.helpers.dictionary import DictionaryConverter

    registry = DictionaryConverter()
    registry.register(
        datetime,
        lambda dt: {'value': dt.isoformat()},
        lambda dct: datetime.fromisoformat(dct['value'])
    )
The arguments to :code:`register` are:

* `cls`: the class to be converted
* `to_dict`: a function that returns a dictionary containing JSON-serializable objects
* `from_dict`: a function that takes the output of `to_dict` and restores the original object

When the :code:`register` method is called, a `typename` is created and maps are set up between `cls` and the `to_dict`
function, `cls` and `typename`, and `typename` and `from_dict`.
When :code:`registry.convert` is called on an object, the `cls` is used to retrieve the `to_dict` function and the
`typename`. The `to_dict` function is called on the object and the `typename` is added to the resulting dictionary.
When :code:`registry.restore` is called with a dictionary, it is checked for a `typename` key, and if one exists, it
is used to retrieve the `from_dict` function and the dictionary is passed to it.
If an object is not recognized, it will be passed on as-is.
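A quick round trip through the registry above shows the mechanics (the exact `typename` string is an
assumption; it is derived from the registered class):

.. code:: python

    dct = registry.convert(datetime(2023, 5, 29))
    # dct now looks something like {'value': '2023-05-29T00:00:00', 'typename': 'datetime'}
    restored = registry.restore(dct)
    assert restored == datetime(2023, 5, 29)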
Custom Script Engines
---------------------
@ -269,20 +273,30 @@ security reasons.
.. warning::

   By default, the scripting environment passes input directly to :code:`eval` and :code:`exec` with little
   to no sanitization! In most cases, you'll want to replace the default scripting environment with one of
   your own.
We'll cover a simple extension of a custom script engine here. There is also an example of
a similar engine based on `RestrictedPython <https://restrictedpython.readthedocs.io/en/latest/>`_
included alongside this example.

Files referenced in this section:

* `script_engine.py <https://github.com/sartography/spiff-example-cli/blob/main/runner/script_engine.py>`_
* `product_info.py <https://github.com/sartography/spiff-example-cli/blob/main/runner/product_info.py>`_
* `subprocess.py <https://github.com/sartography/spiff-example-cli/blob/main/runner/subprocess.py>`_
* `spiff-bpmn-runner.py <https://github.com/sartography/spiff-example-cli/blob/main/spiff-bpmn-runner.py>`_

The default script engine does not import any objects.
You could add functions or classes from the standard python modules or any code you've
implemented yourself. Your global environment can be passed in using the `default_globals`
argument when initializing the script engine. In our RestrictedPython example, we use their
`safe_globals`, which prevents users from executing some potentially unsafe operations.
The following example replaces the default global environment with the one provided by
`RestrictedPython <https://restrictedpython.readthedocs.io/en/latest/>`_
.. code:: python
    from RestrictedPython import safe_globals
    from SpiffWorkflow.bpmn.PythonScriptEngine import PythonScriptEngine
    from SpiffWorkflow.bpmn.PythonScriptEngineEnvironment import TaskDataEnvironment

    restricted_env = TaskDataEnvironment(safe_globals)
    restricted_script_engine = PythonScriptEngine(environment=restricted_env)
Another reason you might want to customize the scripting environment is to provide access to custom
classes or functions.
In our example models so far, we've been using DMN tables to obtain product information. DMN
tables have a **lot** of uses so we wanted to feature them prominently, but in a simple way.
@ -293,20 +307,21 @@ our diagram (although it is much easier to modify the BPMN diagram than to chang
itself!). Our shipping costs would not be static, but would depend on the size of the order and
where it was being shipped -- maybe we'd query an API provided by our shipper.
SpiffWorkflow is obviously **not** going to know how to make a call to **your** database or
make API calls to **your** vendors. However, you can implement the calls yourself and make them
available as a method that can be used within a script task.
SpiffWorkflow is obviously **not** going to know how to query **your** database or make API calls to
**your** vendors. However, one way of making this functionality available inside your diagram is to
implement the calls in functions and add those functions to the scripting environment, where they
can be called by Script Tasks.
We are not going to actually include a database or API and write code for connecting to and querying
it, but since we only have 7 products we can model our database with a simple dictionary lookup
and just return the same static info for shipping for the purposes of the tutorial.
We'll define some resources in `product_info.py`
.. code:: python
    from collections import namedtuple

    ProductInfo = namedtuple('ProductInfo', ['color', 'size', 'style', 'price'])

    INVENTORY = {
@ -325,92 +340,153 @@ and just return the same static info for shipping for the purposes of the tutori
    def lookup_shipping_cost(shipping_method):
        return 25.00 if shipping_method == 'Overnight' else 5.00
We'll add these functions to our scripting environment in `script_engine.py`

.. code:: python

    env_globals = {
        'lookup_product_info': lookup_product_info,
        'lookup_shipping_cost': lookup_shipping_cost,
        'datetime': datetime,
    }
    custom_env = TaskDataEnvironment(env_globals)
    custom_script_engine = PythonScriptEngine(environment=custom_env)
.. note::

   We're also adding :code:`datetime`, because we added the timestamp to the payload of our message when we
   set up the Message Event (see :doc:`events`)

When we initialize the runner in `spiff-bpmn-runner.py`, we'll import and use `custom_script_engine` as our
script engine.

We can use the custom functions in script tasks like any normal function. We've replaced the Business Rule
Task that determines product price with a script that simply checks the `price` field on our product.

.. figure:: figures/script_engine/top_level.png
   :scale: 30%
   :align: center

   Top Level Workflow with Custom Script Engine
And we can simplify the gateways in our 'Call Activity' flows as well:

.. figure:: figures/script_engine/call_activity.png
   :scale: 30%
   :align: center

   Call Activity with Custom Script Engine

To run this workflow (you'll have to manually change which script engine you import):

.. code-block:: console

    ./spiff-bpmn-runner.py -p order_product -b bpmn/tutorial/top_level_script.bpmn bpmn/tutorial/call_activity_script.bpmn
Another reason to customize the scripting environment is to allow it to run completely separately from
SpiffWorkflow, for performance or security reasons. It is also possible to completely replace `exec` and
`eval` with something else, or to execute or evaluate statements in a separate environment, by subclassing
:code:`PythonScriptEngine` and overriding `_execute` and `_evaluate`.
In our example repo, we've created a simple command line script in `runner/subprocess.py` that takes serialized global
and local environments and a script or expression to execute or evaluate. In `runner/script_engine.py`, we create
a scripting environment that runs the current :code:`execute` or :code:`evaluate` request in a subprocess with this
script. We've imported our custom methods into `subprocess.py` so they are automatically available when it is used.

This example is needlessly complex for the work we're doing in this case, but the point of the example is to demonstrate
that this could be a Docker container with a complex environment, or an HTTP API running somewhere else entirely.

.. note::

   Note that our execute method returns :code:`True`. We could check the status of our process here and return
   :code:`False` to force our task into an `ERROR` state if the task failed to execute.

   We could also return :code:`None` if the task is not finished; this will cause the task to go into the `STARTED`
   state. You would have to manually complete a task that has been `STARTED`. The purpose of the state is to tell
   SpiffWorkflow that your application will handle monitoring and updating this task, and that other branches that
   do not depend on this task may proceed. It is intended to be used with potentially long-running tasks.

See :doc:`../concepts` for more information about Task States and Workflow execution.
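To make that pattern concrete, here is a rough sketch of such an environment (this is *not* the example
repo's actual code; the helper script path, the JSON protocol, and the :code:`execute` signature are
assumptions based on the description above):

.. code:: python

    import json
    import subprocess

    from SpiffWorkflow.bpmn.PythonScriptEngineEnvironment import TaskDataEnvironment

    class SubprocessEnvironment(TaskDataEnvironment):

        def execute(self, script, context, external_methods=None):
            # Hand the script and the task data to a separate process
            payload = json.dumps({'script': script, 'context': context})
            result = subprocess.run(
                ['python', 'runner/subprocess.py'],  # hypothetical helper script
                input=payload, capture_output=True, text=True, check=True,
            )
            # Merge the updated data back into the task
            context.update(json.loads(result.stdout))
            # True marks the task COMPLETED; None would leave it STARTED
            return True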
Service Tasks
-------------
Service Tasks are also executed by the workflow's script engine, but through a different method, with the help of some
custom extensions in the :code:`spiff` package:
- `operation_name`, the name assigned to the service being called
- `operation_params`, the parameters the operation requires
This is our script engine and scripting environment:
.. code:: python

    service_task_env = TaskDataEnvironment({
        'product_info_from_dict': product_info_from_dict,
        'datetime': datetime,
    })

    class ServiceTaskEngine(PythonScriptEngine):

        def __init__(self):
            super().__init__(environment=service_task_env)

        def call_service(self, operation_name, operation_params, task_data):
            if operation_name == 'lookup_product_info':
                product_info = lookup_product_info(operation_params['product_name']['value'])
                result = product_info_to_dict(product_info)
            elif operation_name == 'lookup_shipping_cost':
                result = lookup_shipping_cost(operation_params['shipping_method']['value'])
            else:
                raise Exception("Unknown Service!")
            return json.dumps(result)

    service_task_engine = ServiceTaskEngine()
Instead of adding our custom functions to the environment, we'll override :code:`call_service` and call them directly
according to the `operation_name` that was given. The :code:`spiff` Service Task also evaluates the parameters
against the task data for us, so we can pass those in directly. The Service Task will also store our result in
a user-specified variable.

We need to send the result back as json, so we'll reuse the functions we wrote for the serializer.

The Service Task will assign the dictionary as the operation result, so we'll add a `postScript` to the Service Task
that creates a :code:`ProductInfo` instance from that dictionary, which is why we import `product_info_from_dict`
into the environment as well.
The XML for the Service Task looks like this:
.. code:: xml

    <bpmn:serviceTask id="Activity_1ln3xkw" name="Lookup Product Info">
      <bpmn:extensionElements>
        <spiffworkflow:serviceTaskOperator id="lookup_product_info" resultVariable="product_info">
          <spiffworkflow:parameters>
            <spiffworkflow:parameter id="product_name" type="str" value="product_name"/>
          </spiffworkflow:parameters>
        </spiffworkflow:serviceTaskOperator>
        <spiffworkflow:postScript>product_info = product_info_from_dict(product_info)</spiffworkflow:postScript>
      </bpmn:extensionElements>
      <bpmn:incoming>Flow_104dmrv</bpmn:incoming>
      <bpmn:outgoing>Flow_06k811b</bpmn:outgoing>
    </bpmn:serviceTask>
Getting this information into the XML is a little bit beyond the scope of this tutorial, as it involves more than
just SpiffWorkflow. I hand edited it for this case, but you can hardly ask your BPMN authors to do that!

Our `modeler <https://github.com/sartography/bpmn-js-spiffworkflow>`_ has a means of providing a list of services and
their parameters that can be displayed to a BPMN author in the Service Task configuration panel. There is an example of
hard-coding a list of services in
`app.js <https://github.com/sartography/bpmn-js-spiffworkflow/blob/0a9db509a0e85aa7adecc8301d8fbca9db75ac7c/app/app.js#L47>`_
and, as suggested, it would be reasonably straightforward to replace this with an API call. `SpiffArena <https://www.spiffworkflow.org/posts/articles/get_started/>`_
has robust mechanisms for handling this that might serve as a model for you.
How this all works is obviously heavily dependent on your application, so we won't go into further detail here, except
to give you a bare bones starting point for implementing something yourself that meets your own needs.
To run this workflow (you'll have to manually change which script engine you import):
.. code-block:: console
    ./spiff-bpmn-runner.py -p order_product -b bpmn/tutorial/top_level_service_task.bpmn bpmn/tutorial/call_activity_service_task.bpmn


@ -0,0 +1,45 @@
Events
======
Message Events
--------------
Configuring Message Events
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. figure:: figures/throw_message_event.png
   :scale: 60%
   :align: center

   Throw Message Event configuration

.. figure:: figures/message_start_event.png
   :scale: 60%
   :align: center

   Message Catch Event configuration
The Throw Message Event Implementation should be 'Expression' and the Expression should
be a Python statement that can be evaluated. In this example, we'll just send the contents
of the :code:`reason_delayed` variable, which contains the response from the 'Investigate Delay'
Task.
We can provide a name for the result variable, but I have not done that here, as it does not
make sense to me for the generator of the event to tell the handler what to call the value.
If you *do* specify a result variable, the message payload (the expression evaluated in the
context of the Throwing task) will be added to the handling task's data in a variable of that
name; if you leave it blank, SpiffWorkflow will create a variable of the form <Handling
Task Name>_Response.
Running the Model
^^^^^^^^^^^^^^^^^
If you have set up our example repository, this model can be run with the
following command:
.. code-block:: console
    ./camunda-bpmn-runner.py -c order_collaboration \
        -d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
        -b bpmn/camunda/events.bpmn bpmn/camunda/call_activity.bpmn


@ -0,0 +1,30 @@
MultiInstance Tasks
===================
Earlier versions of SpiffWorkflow relied on the properties available in the Camunda MultiInstance Panel.
.. figure:: figures/multiinstance_task_configuration.png
   :scale: 60%
   :align: center

   MultiInstance Task configuration
SpiffWorkflow has a MultiInstance Task spec in the :code:`camunda` package that interprets these fields
in the following way:
* Loop Cardinality:
- If this is an integer, or a variable that evaluates to an integer, this number would be
used to determine the number of instances
- If this is a collection, the size of the collection would be used to determine the number of
instances
* Collection: the output collection (input collections have to be specified in the "Cardinality" field).
* Element variable: the name of the variable to copy the item into for each instance.
.. warning::
The spec in this package is based on an old version of Camunda; it may not reflect how Camunda
actually used these fields, and it may differ from newer or current versions.
*Use at your own risk!*
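With that caveat, here is a plain-Python sketch of the cardinality rules described above (not the actual task spec code):
.. code:: python
# Sketch of the Loop Cardinality rules; not SpiffWorkflow internals.
def instance_count(cardinality):
    if isinstance(cardinality, int):
        return cardinality        # an explicit number of instances
    return len(cardinality)       # a collection: one instance per element

assert instance_count(3) == 3
assert instance_count({'a': 'A', 'b': 'B'}) == 2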


@ -0,0 +1,23 @@
Camunda Editor Support
======================
.. warning:: There is a better way ...
SpiffWorkflow does not aim to support all of Camunda's proprietary extensions.
Many of the items in the Camunda Properties Panel do not work, and
major features of SpiffWorkflow (Messages, Data Objects, Service Tasks, Pre-Scripts, etc...)
cannot be configured in the Camunda editor. Use `SpiffArena <https://www.spiffworkflow.org/posts/articles/get_started/>`_
to build and test your BPMN models instead!
Earlier users of SpiffWorkflow relied heavily on Camunda's modeler and several of our task spec
implementations were based on Camunda's extensions. Support for these extensions has been moved
to the :code:`camunda` package. We are not actively maintaining this package (though we will
accept contributions from Camunda users!). Please be aware that many of the Camunda extensions
that will appear in the Camunda editor do not work with SpiffWorkflow.
.. toctree::
:maxdepth: 3
tasks
events
multiinstance

doc/bpmn/camunda/tasks.rst Normal file

@ -0,0 +1,104 @@
Tasks
=====
User Tasks
----------
Creating a User Task
^^^^^^^^^^^^^^^^^^^^
When you click on a user task in the BPMN modeler, the Properties Panel includes a form tab. Use this
tab to build your questions.
The following example shows how a form might be set up in Camunda.
.. figure:: figures/user_task.png
:scale: 30%
:align: center
User Task configuration
Manual Tasks
------------
Creating a Manual Task
^^^^^^^^^^^^^^^^^^^^^^
We can use the BPMN element Documentation field to display more information about the context of the item.
Spiff is set up in a way that you could use any templating library you want, but we have used
`Jinja <https://jinja.palletsprojects.com/en/3.0.x/>`_.
In this example, we'll present an order summary to our customer.
.. figure:: figures/documentation.png
:scale: 30%
:align: center
Element Documentation
Running The Model
-----------------
If you have set up our example repository, this model can be run with the
following command:
.. code-block:: console
./camunda-bpmn-runner.py -p order_product -d bpmn/tutorial/product_prices.dmn -b bpmn/camunda/task_types.bpmn
Example Application Code
------------------------
Handling the User Task
^^^^^^^^^^^^^^^^^^^^^^
.. code:: python
# EnumFormField comes from SpiffWorkflow's camunda package; update_data is
# defined below, and DeepMerge is a SpiffWorkflow utility for merging
# nested dictionaries.
dct = {}
for field in task.task_spec.form.fields:
    if isinstance(field, EnumFormField):
        option_map = dict([ (opt.name, opt.id) for opt in field.options ])
        options = "(" + ', '.join(option_map) + ")"
        prompt = f"{field.label} {options} "
        option = input(prompt)
        while option not in option_map:
            print('Invalid selection!')
            option = input(prompt)
        response = option_map[option]
    else:
        response = input(f"{field.label} ")
        if field.type == "long":
            response = int(response)
    update_data(dct, field.id, response)
DeepMerge.merge(task.data, dct)
The list of form fields for a task is stored in :code:`task.task_spec.form.fields`.
For Enumerated fields, we want to get the possible options and present them to the
user. The variable names of the fields were stored in :code:`field.id`, but since
we set labels for each of the fields, we'd like to display those instead, and map
the user's selection back to the variable name.
For other fields, we'll just store whatever the user enters, although in the case
where the data type was specified to be a :code:`long`, we'll convert it to a
number.
Finally, we need to explicitly store the user-provided response in a variable
with the expected name with :code:`update_data(dct, field.id, response)` and merge
the newly collected data into our task data with :code:`DeepMerge.merge(task.data, dct)`.
Our :code:`update_data` function handles "dot notation" in field names, which creates
nested dictionaries based on the path components.
.. code:: python
def update_data(dct, name, value):
    path = name.split('.')
    current = dct
    for component in path[:-1]:
        if component not in current:
            current[component] = {}
        current = current[component]
    current[path[-1]] = value
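For example, a field id of :code:`product.product_available` yields a nested dictionary:
.. code:: python
dct = {}
update_data(dct, 'product.product_available', True)
assert dct == {'product': {'product_available': True}}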


@ -0,0 +1,133 @@
Implementing a Custom Task Spec
-------------------------------
Suppose we wanted to manage Timer Start Events outside of SpiffWorkflow. If we have a process loaded up and running that
starts with a timer, the timer waits until the event occurs; this might be days or weeks later.
Of course, we can always check that it's waiting and serialize the workflow until that time. However, we might decide that
we don't want SpiffWorkflow to manage this at all. We could do this with a custom task spec.
First we'll create a new class
.. code:: python
from SpiffWorkflow.bpmn.specs.event_definitions import TimerEventDefinition, NoneEventDefinition
from SpiffWorkflow.bpmn.specs.mixins.events.start_event import StartEvent
from SpiffWorkflow.spiff.specs.spiff_task import SpiffBpmnTask

class CustomStartEvent(StartEvent, SpiffBpmnTask):

    def __init__(self, wf_spec, bpmn_id, event_definition, **kwargs):
        if isinstance(event_definition, TimerEventDefinition):
            super().__init__(wf_spec, bpmn_id, NoneEventDefinition(), **kwargs)
            self.timer_event = event_definition
        else:
            super().__init__(wf_spec, bpmn_id, event_definition, **kwargs)
            self.timer_event = None
When we create our custom event, we'll check to see if we're creating a Start Event with a TimerEventDefinition, and if so,
we'll replace it with a NoneEventDefinition.
.. note::
Our class inherits from two classes. We import a mixin class that defines generic BPMN Start Event behavior from
:code:`StartEvent` in the :code:`bpmn` package and the :code:`SpiffBpmnTask` from the :code:`spiff` package, which
extends the default :code:`BpmnSpecMixin`.
We've split the basic behavior for specific BPMN tasks from the :code:`BpmnSpecMixin` to make it easier to extend
them without running into MRO issues.
In general, if you implement a custom task spec, you'll need to inherit from bases of both categories.
Whenever we create a custom task spec, we'll need to create a converter for it so that it can be serialized.
.. code:: python
from SpiffWorkflow.bpmn.serializer.workflow import BpmnWorkflowSerializer
from SpiffWorkflow.bpmn.serializer.task_spec import StartEventConverter
from SpiffWorkflow.spiff.serializer.task_spec import SpiffBpmnTaskConverter
from SpiffWorkflow.spiff.serializer.config import SPIFF_SPEC_CONFIG

class CustomStartEventConverter(SpiffBpmnTaskConverter):

    def __init__(self, registry):
        super().__init__(CustomStartEvent, registry)

    def to_dict(self, spec):
        dct = super().to_dict(spec)
        if spec.timer_event is not None:
            dct['event_definition'] = self.registry.convert(spec.timer_event)
        else:
            dct['event_definition'] = self.registry.convert(spec.event_definition)
        return dct

SPIFF_SPEC_CONFIG['task_specs'].remove(StartEventConverter)
SPIFF_SPEC_CONFIG['task_specs'].append(CustomStartEventConverter)

wf_spec_converter = BpmnWorkflowSerializer.configure_workflow_spec_converter(SPIFF_SPEC_CONFIG)
serializer = BpmnWorkflowSerializer(wf_spec_converter)
Our converter will inherit from the :code:`SpiffBpmnTaskConverter`, since that's our base generic BPMN mixin class.
The :code:`SpiffBpmnTaskConverter` ultimately inherits from
:code:`SpiffWorkflow.bpmn.serializer.helpers.task_spec.BpmnTaskSpecConverter`, which provides some helper methods for
extracting standard attributes from tasks; the :code:`SpiffBpmnTaskConverter` does the same for extensions from the
:code:`spiff` package.
We don't have to do much -- all we do is replace the event definition with the original. The timer event will be
moved when the task is restored.
.. note::
It might be better to have the class's init method take both the event definition to use *and* the timer event
definition. Unfortunately, our parser is not terribly intuitive or easily extendable, so I've done it this
way to make this a little easier to follow.
When we create our serializer, we need to tell it about this task. We'll remove the converter for the standard Start
Event, add the one we created to the configuration, and create the :code:`workflow_spec_converter` from the updated
config.
.. note::
We have not instantiated our converter class. When we call :code:`configure_workflow_spec_converter` with a
configuration (which is essentially a list of classes, split up into sections for organizational purposes),
*it* instantiates the classes for us, using the same `registry` for every class. At the end of the configuration,
it returns this registry, which now knows about all of the classes that will be used for SpiffWorkflow
specifications. It is possible to pass a separately created :code:`DictionaryConverter` preconfigured with
other converters; in that case, it will be used as the base `registry`, to which specification conversions will
be added.
Because we've built up the `registry` in such a way, we can make use of the :code:`registry.convert` and
:code:`registry.restore` methods rather than figuring out how to serialize them. We can use these methods on any
objects that SpiffWorkflow knows about.
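For instance (a sketch that assumes the default registry registers conversions for types like :code:`timedelta`, which SpiffWorkflow uses for timer durations):
.. code:: python
from datetime import timedelta

# wf_spec_converter is the registry returned above.
dct = wf_spec_converter.convert(timedelta(days=1))  # a JSON-serializable dict
delta = wf_spec_converter.restore(dct)              # back to timedelta(days=1)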
See :doc:`advanced` for more information about the serializer.
Finally, we have to update our parser:
.. code:: python
from SpiffWorkflow.spiff.parser.event_parsers import StartEventParser
from SpiffWorkflow.spiff.parser.process import SpiffBpmnParser
from SpiffWorkflow.bpmn.parser.util import full_tag
parser = SpiffBpmnParser()
parser.OVERRIDE_PARSER_CLASSES[full_tag('startEvent')] = (StartEventParser, CustomStartEvent)
The parser contains class attributes that define how to parse a particular element and the class that should be used to
create the task spec, so rather than pass these in as arguments, we create a parser and then update the values it
will use. This is a bit unintuitive, but that's how it works.
Fortunately, we were able to reuse an existing Task Spec parser, which simplifies the process quite a bit.
Having created a parser and serializer, we could replace the ones we pass to the :code:`SimpleBpmnRunner` with these.
I am going to leave creating a script that makes use of them to readers of this document, as it should be clear enough
how to do so.
There is a very simple diagram `bpmn/tutorial/timer_start.bpmn` (process ID `timer_start`) whose Start Event has
a Duration Timer of one day; it can be used to illustrate how the custom task works. If you run this workflow
with `spiff-bpmn-runner.py`, you'll see a `WAITING` Start Event; if you use the parser and serializer we just created,
you'll be prompted to complete the User Task immediately.

doc/bpmn/data.rst Normal file

@ -0,0 +1,98 @@
Data
====
BPMN Model
----------
We'll be using the following files from `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.
- `bpmn-spiff/events <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/events.bpmn>`_ workflow
- `bpmn-spiff/call_activity <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/call_activity.bpmn>`_ workflow
- `bpmn-spiff/data_output <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/data_output.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/shipping_costs.dmn>`_ DMN table
Data Objects
^^^^^^^^^^^^
Data Objects exist at the process level and are not visible in the diagram, but when you create a Data Object
Reference, you can choose what Data Object it points to.
.. figure:: figures/data/data_object_configuration.png
:scale: 50%
:align: center
Configuring a Data Object Reference
When a Data Output association (a line) is drawn from a task to a Data Object Reference, the value is copied
from the task data to the workflow data and removed from the task. If a Data Input Association is created from
a Data Object Reference, the value is temporarily copied into the task data while the task is being executed,
and immediately removed afterwards.
This allows sensitive data to be removed from individual tasks (in our example, the customer's credit card
number). It can also be used to prevent large objects from being repeatedly copied from task to task.
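Conceptually, the associations behave like this (a plain-Python sketch with hypothetical names, not Spiff's internals):
.. code:: python
workflow_data = {}
task_data = {'card_number': '4111 1111 1111 1111'}

# Data Output association: the value moves from task data to workflow data.
workflow_data['credit_card'] = task_data.pop('card_number')

# Data Input association: copied in only while the task executes.
task_data['credit_card'] = workflow_data['credit_card']
# ... the task runs ...
del task_data['credit_card']   # removed again when the task completes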
Multiple Data Object References can point to the same underlying data. In our example, we use two references
to the same Data Object to pass the credit card info to both tasks that require it. On the right panel, we can
see that only one data object exists in the process.
.. figure:: figures/data/data_objects.png
:scale: 30%
:align: center
Data objects in a process
If you step through this workflow, you'll see that the card number is not contained in the task data after
the 'Enter Payment Info' task has been completed but is available to the 'Charge Customer' task later on.
Running The Model
*****************
If you have set up our example repository, this model can be run with the following command:
.. code-block:: console
./spiff-bpmn-runner.py -c order_collaboration \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/events.bpmn bpmn/tutorial/call_activity.bpmn
Data Inputs and Outputs
^^^^^^^^^^^^^^^^^^^^^^^
In complex workflows, it is useful to be able to specify required Data Inputs and Outputs, especially for Call Activities
given that they are external and might be shared across many different processes.
When you add a Data Input to a Call Activity, SpiffWorkflow will check that a variable with that name is available to
be copied into the activity and copy *only* the variables you've specified as inputs. When you add a Data Output,
SpiffWorkflow will copy *only* the variables you've specified from the Call Activity at the end of the process. If any
of the variables are missing, SpiffWorkflow will raise an error.
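A minimal sketch of that boundary check, using hypothetical declared names (our example declares no inputs and two outputs):
.. code:: python
declared_inputs = []
declared_outputs = ['product_name', 'product_quantity']

parent_data = {'order_total': 25.0}
child_data = {name: parent_data[name] for name in declared_inputs}  # only inputs cross in
# ... the Call Activity runs ...
child_data.update({'product_name': 'product_c', 'product_quantity': 2, 'product_color': 'red'})

missing = [name for name in declared_outputs if name not in child_data]
if missing:
    raise KeyError(f'missing data outputs: {missing}')
# Only declared outputs cross back out; product_color is dropped.
parent_data.update({name: child_data[name] for name in declared_outputs})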
Our product customization Call Activity does not require any input, but the output of the process is the product
name and quantity. We can add corresponding Data Outputs for those.
.. figure:: figures/data/data_output.png
:scale: 30%
:align: center
Data Outputs in a Call Activity
If you use this version of the Call Activity and choose a product that has customizations, when you inspect the data
after the Call Activity completes, you'll see that the customizations have been removed. We won't continue to use this
version of the Call Activity, because we want to preserve all the data.
.. note::
The BPMN spec allows *any* task to have Data Inputs and Outputs. Our modeler does not provide a way to add them to
arbitrary tasks, but SpiffWorkflow will recognize them on any task if they are present in the BPMN XML.
Running The Model
*****************
If you have set up our example repository, this model can be run with the following command:
.. code-block:: console
./spiff-bpmn-runner.py -p order_product \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/top_level.bpmn bpmn/tutorial/data_output.bpmn


@ -1,13 +1,14 @@
SpiffWorkflow Exceptions
====================================
========================
Details about the exceptions and exception hierarchy within SpiffWorkflow
SpiffWorkflowException
----------
----------------------
Base exception for all exceptions raised by SpiffWorkflow
ValidationException
----------
-------------------
**Extends**
SpiffWorkflowException
@ -25,7 +26,7 @@ Thrown during the parsing of a workflow.
WorkflowException
--------
-----------------
When an error occurs with a Task Specification (maybe should have been called
a SpecException)
@ -34,13 +35,12 @@ SpiffWorkflowException
**Attributes/Methods**
- **sender**: The TaskSpec - the specific Task, Gateway, etc... that caused the error to happen.
- **task_spec**: The TaskSpec - the specific Task, Gateway, etc... that caused the error to happen.
- **error**: a human readable error message describing the problem.
- **get_task_trace**: Provided a specific Task, will work its way through the workflow / sub-processes
and call activities to show where an error occurred. Useful if the error happened within a deeply nested structure (where call activities include call activities ....)
WorkflowDataException
------------------
---------------------
When an exception occurs moving data between tasks and Data Objects (including
data inputs and data outputs.)
@ -56,10 +56,16 @@ WorkflowException
- **data_output**: The spec of the output variable
WorkflowTaskException
--------
---------------------
**Extends**
WorkflowException
It will accept the line_number and error_line as arguments - if the
underlying error provided is a SyntaxError it will try to derive this
information from the error.
If this is a name error, it will attempt to calculate a did-you-mean
error_msg.
**Attributes/Methods**
(in addition to the values in a WorkflowException)
@ -70,21 +76,8 @@ WorkflowException
- **line_number** The line number that contains the error
- **offset** The point in the line that caused the error
- **error_line** The content of the line that caused the error.
It will accept the line_number and error_line as arguments - if the
underlying error provided is a SyntaxError it will try to derive this
information from the error.
If this is a name error, it will attempt to calculate a did-you-mean
error_msg.
Unused / Deprecated errors
--------------------
**StorageException**
Deprecated -- Used only by the PrettyXmlSerializer - which is not under active
support.
**DeadMethodCalled**
Something related to WeakMethod -- which doesn't look to be utilized anymore.
- **get_task_trace**: Provided a specific Task, will work its way through the workflow/sub-processes and
call activities to show where an error occurred. Useful if the error happened within a deeply nested
structure (where call activities include call activities ....)
- **did_you_mean_name_error**: Compares a missing data value with the contents of the data


@ -6,14 +6,14 @@ BPMN Model
We'll be using the following files from `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.
- `transaction <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/transaction.bpmn>`_ workflow
- `signal_event <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/signal_event.bpmn>`_ workflow
- `events <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/events.bpmn>`_ workflow
- `call activity <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/call_activity.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/shipping_costs.dmn>`_ DMN table
- `transaction <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/transaction.bpmn>`_ workflow
- `signal_event <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/signal_event.bpmn>`_ workflow
- `events <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/events.bpmn>`_ workflow
- `call activity <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/call_activity.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/shipping_costs.dmn>`_ DMN table
A general overview of events in BPMN can be found in the :doc:`/intro`
A general overview of events in BPMN can be found in the :doc:`overview`
section of the documentation.
SpiffWorkflow supports the following Event Definitions:
@ -28,11 +28,16 @@ SpiffWorkflow supports the following Event Definitions:
We'll include examples of all of these types in this section.
.. note::
SpiffWorkflow can also support Multiple Event definitions, but our modeler does not allow you to create them,
so we will not delve into them further here.
Transactions
^^^^^^^^^^^^
We also need to introduce the concept of a Transaction because certain events
can only be used in that context. A Transaction is essentially a subprocess, but
can only be used in that context. A Transaction is essentially a Subprocess, but
it must fully complete before it affects its outer workflow.
We'll make our customer's ordering process through the point they review their order
@ -46,20 +51,20 @@ only be used in Transactions.
Cancel Events
^^^^^^^^^^^^^
.. figure:: figures/transaction.png
.. figure:: figures/events/transaction.png
:scale: 30%
:align: center
Workflow with a transaction and Cancel Event
Workflow with a Transaction and Cancel Event
We changed our 'Review Order' Task to be a User Task and have added a form, so
that we can give the customer the option of cancelling the order. If the customer
answers 'Y', then the workflow ends normally and we proceed to collecting
payment information.
However, if the user elects to cancel their order, we use a 'Cancel End Event'
instead, which generates a Cancel Event. We can then attach a 'Cancel Boundary
Event' to the Transaction, and execute that path if the event occurs. Instead of
However, if the user elects to cancel their order, we use a Cancel End Event
instead, which generates a Cancel Event. We can then attach a Cancel Boundary
Event to the Transaction, and execute that path if the event occurs. Instead of
asking the customer for their payment info, we'll direct them to a form and ask
them why they cancelled their order.
@ -70,15 +75,15 @@ To run this workflow
.. code-block:: console
./run.py -p order_product \
-d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
-b bpmn/transaction.bpmn bpmn/call_activity.bpmn
./spiff-bpmn-runner.py -p order_product \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/transaction.bpmn bpmn/tutorial/call_activity.bpmn
Signal Events
^^^^^^^^^^^^^
.. figure:: figures/signal_event.png
.. figure:: figures/events/signal_event.png
:scale: 30%
:align: center
@ -91,43 +96,41 @@ Once the charge is placed, the task that provides the option to cancel will
itself be cancelled when the charge event is received.
We'll also need to detect the case that the customer cancels their order and
cancel the charge task if it occurs; we'll use a separate signal for that.
cancel the charge task if it occurs; we'll use a separate Signal for that.
Multiple tasks can catch the same signal event. Suppose we add a Manager role
to our workflow, and allow the Employee to refer unsuccessful charges to the
Multiple tasks can catch the same Signal Event. Suppose we add a Manager role
to our Process, and allow the Employee to refer unsuccessful charges to the
Manager for resolution. The Manager's task will also need to catch the 'Order
Cancelled' signal event.
Cancelled' Signal Event.
Signals are referred to by name.
.. figure:: figures/throw_signal_event.png
:scale: 30%
.. figure:: figures/events/throw_signal_event.png
:scale: 60%
:align: center
Signal Event configuration
.. Terminate Events:
Terminate Events
^^^^^^^^^^^^^^^^
We also added a Terminate Event to the Manager Workflow. A regular End Event
simply marks the end of a path. A Terminate Event will indicate that the
entire workflow is complete and any remaining tasks should be cancelled. Our
entire Process is complete and any remaining tasks should be cancelled. Our
customer cannot cancel an order that has already been cancelled, and we won't ask
them for feedback about it (we know it wasn't completed), so we do not want to
execute either of those tasks.
We'll now modify our workflow to add an example of each of the other types of
events that SpiffWorkflow Supports.
them for feedback about it (we know that it was because we were unable to charge
them for it), so we do not want to execute either of those tasks.
To run this workflow
.. code-block:: console
./run.py -p order_product \
-d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
-b bpmn/signal_event.bpmn bpmn/call_activity.bpmn
./spiff-bpmn-runner.py -p order_product \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/signal_event.bpmn bpmn/tutorial/call_activity.bpmn
We'll now modify our workflow to add an example of each of the other types of
events that SpiffWorkflow supports.
Error Events
^^^^^^^^^^^^
@ -135,7 +138,7 @@ Error Events
Let's turn to our order fulfillment subprocess. Either of these steps could
potentially fail, and we may want to handle each case differently.
.. figure:: figures/events.png
.. figure:: figures/events/events.png
:scale: 30%
:align: center
@ -170,16 +173,16 @@ Escalation Boundary Event.
Both Error and Escalation Events can be optionally associated with a code. Here is the
Throw Event for our `product_not_shipped` Escalation.
.. figure:: figures/throw_escalation_event.png
:scale: 30%
.. figure:: figures/events/throw_escalation_event.png
:scale: 60%
:align: center
Throw Escalation Event configuration
Error Event configuration is similar.
If no code is provided in a Catch event, any event of the corresponding type will catch
the event.
If no code is provided in a Catch Event, it will catch any Escalation with the same
name.
Timer Events
^^^^^^^^^^^^
@ -191,17 +194,21 @@ amount of time before continuing. We can use this as a regular Intermediate Eve
this case, we simply want to notify the customer of the delay while continuing to process
their order, so we use a Non-Interrupting Event.
.. figure:: figures/timer_event.png
:scale: 30%
.. figure:: figures/events/timer_event.png
:scale: 60%
:align: center
Duration Timer Event configuration
We express the duration as a Python :code:`timedelta`. We show the configuration for the Boundary
Event.
We express the duration as an ISO8601 duration.
It is also possible to use a static datetime to trigger an event. It will need to be parseable
as a date by Python.
.. note::
We enclosed the string in quotes, because it is possible to use a variable to determine
how long the timer should wait.
It is also possible to use a static date and time to trigger an event. It will also need to be
specified in ISO8601 format.
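For example (illustrative values, not necessarily the ones used in the tutorial model):
.. code:: python
# ISO 8601 durations, quoted because the field holds a Python expression.
"PT2M"   # two minutes
"P1D"    # one day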
Timer events can only be caught, that is, waited on. The timer begins implicitly when we
reach the event.
@ -210,44 +217,63 @@ Message Events
^^^^^^^^^^^^^^
In BPMN, Messages are used to communicate across processes. Technically, Messages are not
intended to be used inside a single process, but Spiff does support this use.
intended to be used inside a single Process, but Spiff does support this use.
Messages are similar to signals, in that they are referenced by name, but they have the
additional property that they may contain a payload.
Messages are similar to Signals, in that they are referenced by name, but they have the
additional property that they may contain a payload. The payload is a bit of python code that will be
evaluated against the task data and sent along with the Message. In the corresponding Message Catch
Event or Receive Task, we define a variable name where we'll store the result.
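Conceptually, something like this happens (a plain-Python sketch, not Spiff's internals; :code:`order_info` is the variable we'll configure on the catching side below):
.. code:: python
throwing_data = {'reason_delayed': 'Backordered item'}
payload = eval('reason_delayed', {}, throwing_data)  # the configured payload expression
catching_data = {'order_info': payload}              # stored under the catch event's variable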
We've added a QA process to our model, which will be initiated whenever an order takes to long
to fulfill. We'll send the reason for the delay in the message.
We've added a QA process to our model, which will be initiated whenever an order takes too long
to fulfill. We'll send the reason for the delay in the Message.
.. note::
Spiff Messages can also optionally use Correlation Keys. The Correlation Key is an expression or set of
expressions that are evaluated against a Message payload to create an additional identifier for associating
messages with Processes.
This example depends on some Camunda-specific features in our implementation; there is
an alternate messaging implementation in the Spiff extensions package, described in
:doc:`spiff-extensions`.
In our example, it is possible that multiple QA processes could be started (the timer event will fire every
two minutes until the order fulfillment process is complete; more realistically, QA could be
investigating many entirely different orders, even if our simple runner does not handle that case).
In this case, the Message name is insufficient, as there will be multiple Processes that can accept
Messages based on the name.
.. figure:: figures/throw_message_event.png
:scale: 30%
.. figure:: figures/events/correlation.png
:scale: 50%
:align: center
Throw Message Event configuration
Defining a correlation key
The Throw Message Event Implementation should be 'Expression' and the Expression should
be a Python statement that can be evaluated. In this example, we'll just send the contents
of the :code:`reason_delayed` variable, which contains the response from the 'Investigate Delay'
Task.
We use the timestamp of the Message creation as a unique key that can be used to distinguish between multiple
QA Processes.
.. figure:: figures/events/throw_message_event.png
:scale: 50%
:align: center
Configuring a message throw event
When we receive the event, we assign the payload to :code:`order_info`.
.. figure:: figures/events/catch_message_event.png
:scale: 50%
:align: center
Configuring a message catch event
The correlation is visible on both the Throw and Catch Events, but it is associated with the message rather
than the tasks themselves; if you update the expression on either event, the changes will appear in both places.
We can provide a name for the result variable, but I have not done that here, as it does not
make sense to me for the generator of the event to tell the handler what to call the value.
If you *do* specify a result variable, the message payload (the expression evaluated in the
context of the Throwing task) will be added to the handling task's data in a variable of that
name; if you leave it blank, SpiffWorkflow will create a variable of the form <Handling
Task Name>_Response.
Running The Model
^^^^^^^^^^^^^^^^^
.. code-block:: console
./run.py -p order_product \
-d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
-b bpmn/events.bpmn bpmn/call_activity.bpmn
./spiff-bpmn-runner.py -c order_collaboration \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/events.bpmn bpmn/tutorial/call_activity.bpmn
.. note::
We're specifying a collaboration rather than a process so that SpiffWorkflow knows that there is more than
one top-level process.


@ -13,14 +13,14 @@ method, and we updated our order total calculations to incorporate that cost.
We'll be using the following files from `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.
- `gateway_types <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/gateway_types.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/shipping_costs.dmn>`_ DMN table
- `gateway_types <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/gateway_types.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/shipping_costs.dmn>`_ DMN table
Exclusive Gateway
^^^^^^^^^^^^^^^^^
Exclusive gateways are used when exactly one alternative can be selected.
Exclusive Gateways are used when exactly one alternative can be selected.
Suppose our products are T-shirts and we offer product C in several colors. After
the user selects a product, we check to see if it is customizable. Our default
@ -28,7 +28,7 @@ branch will be 'Not Customizable', but we'll direct the user to a second form
if they select 'C'; our condition for choosing this branch is a simple python
expression.
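The condition might be a hypothetical expression like this, evaluated against the task data (the actual expression used in the tutorial is shown in the figure below):
.. code:: python
# Hypothetical flow condition; 'product_name' is assumed to be in the task data.
product_name == 'product_c'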
.. figure:: figures/exclusive_gateway.png
.. figure:: figures/gateways/exclusive_gateway.png
:scale: 30%
:align: center
@ -44,7 +44,7 @@ Parallel Gateway
leave it blank to avoid visual clutter. I've put a description of the
gateway into the ID field instead.
Parallel gateways are used when the subsequent tasks do not need to be completed
Parallel Gateways are used when the subsequent tasks do not need to be completed
in any particular order. The user can complete them in any sequence and the
workflow will wait for all tasks to be finished before advancing.
@ -53,12 +53,25 @@ address first, but they'll need to complete both tasks before continuing.
We don't need to do any particular configuration for this gateway type.
.. figure:: figures/parallel_gateway.png
.. figure:: figures/gateways/parallel_gateway.png
:scale: 30%
:align: center
Parallel Gateway example
Inclusive Gateway
^^^^^^^^^^^^^^^^^
SpiffWorkflow also supports Inclusive Gateways, though we do not have an example of this gateway
type in this tutorial. Inclusive Gateways have conditions on outgoing flows like Exclusive Gateways,
but unlike Exclusive Gateways, multiple paths may be taken if more than one condition is met.
Event-Based Gateway
^^^^^^^^^^^^^^^^^^^
SpiffWorkflow supports Event-Based Gateways, though we do not use them in this tutorial. Event-Based
gateways select an outgoing flow based on an event. We'll discuss events in the next section.
Running The Model
^^^^^^^^^^^^^^^^^
@ -67,7 +80,7 @@ following command:
.. code-block:: console
./run.py -p order_product \
-d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
-b bpmn/gateway_types.bpmn
./spiff-bpmn-runner.py -p order_product \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/gateway_types.bpmn


@ -3,8 +3,7 @@ BPMN Workflows
The basic idea of SpiffWorkflow is that you can use it to write an interpreter
in Python that creates business applications from BPMN models. In this section,
we'll develop a model of an example process and as well as a
simple workflow runner.
we'll develop a model of a reasonably complex process and show how to run it.
We expect that readers will fall into two general categories:
@ -12,8 +11,8 @@ We expect that readers will fall into two general categories:
- Python developers who might not know much about BPMN
This section of the documentation provides an example that (hopefully) serves
the needs of both groups. We will introduce the BPMN elements that SpiffWorkflow
supports and show how to build a simple workflow runner around them.
the needs of both groups. We will introduce some of the more common BPMN
elements and show how to build a simple workflow runner around them.
SpiffWorkflow does heavy-lifting such as keeping track of task dependencies and
states and providing the ability to serialize or deserialize a workflow that
@ -34,20 +33,22 @@ command:
.. code-block:: console
./run.py -p order_product \
-d bpmn/{product_prices,shipping_costs}.dmn \
-b bpmn/{multiinstance,call_activity_multi}.bpmn
./spiff-bpmn-runner.py -c order_collaboration \
-d bpmn/tutorial/{product_prices,shipping_costs}.dmn \
-b bpmn/tutorial/{top_level_multi,call_activity_multi}.bpmn
.. sidebar:: BPMN Runner
For a full description of program options:
The example app provides a utility for running BPMN Diagrams from the command
line that will allow you to introspect a bit on a running process. You
can see the options available by running:
.. code-block:: console
./run.py --help
./spiff-bpmn-runner.py --help
The code in the workflow runner and the models in the bpmn directory of the
repository will be discussed in the remainder of this tutorial.
Supported BPMN Elements
-----------------------
@ -58,8 +59,8 @@ Supported BPMN Elements
gateways
organization
events
data
multiinstance
spiff-extensions
Putting it All Together
-----------------------
@ -76,3 +77,27 @@ Features in More Depth
:maxdepth: 2
advanced
Custom Task Specs
-----------------
.. toctree::
:maxdepth: 2
custom_task_spec
Exceptions
----------
.. toctree::
:maxdepth: 2
errors
Camunda Editor Support
----------------------
.. toctree::
:maxdepth: 2
camunda/support


@ -6,107 +6,138 @@ BPMN Model
We'll be using the following files from `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.
- `multiinstance <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/multiinstance.bpmn>`_ workflow
- `call activity multi <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/call_activity_multi.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/shipping_costs.dmn>`_ DMN table
- `multiinstance <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/top_level_multi.bpmn>`_ workflow
- `call activity multi <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/call_activity_multi.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/shipping_costs.dmn>`_ DMN table
Loop Task
^^^^^^^^^
Suppose we want our customer to be able to select more than one product.
If we knew how many products they would select at the beginning of the workflow, we could
configure 'Select and Customize Product' as a Sequential MultiInstance Task. We would
specify the name of the collection and each iteration of the task would add a new item
to it.
We'll run our 'Select and Customize Product' Call Activity as a Loop Task.
Since we can't know in advance how many products the customer will order, we'll need to modify that
workflow to ask them whether they want to continue shopping and maintain their product
selections in a collection.
First we'll update the Call Activity's model to ask the customer if they would like to continue shopping.
.. figure:: figures/call_activity_multi.png
.. figure:: figures/multiinstance/call_activity_multi.png
:scale: 30%
:align: center
Selecting more than one product
We'll also need to update our element documentation to display all products.
We've also added a *postScript* to the user task. SpiffWorkflow provides extensions that allow scripts to be
run before and after tasks. It is often the case that data needs to be manipulated before and after a task.
We could add regular Script Tasks before and after, but diagrams quickly become cluttered with scripts, and
these extensions are intended to alleviate that.
.. figure:: figures/documentation_multi.png
We use a *postScript* to add the current product to a list of products.
.. code:: python
products.append({
    'product_name': product_name,
    'product_quantity': product_quantity,
    'product_color': product_color,
    'product_size': product_size,
    'product_style': product_style,
    'product_price': product_price,
})
We'll use a *preScript* on the first User Task (Select Product and Quantity) to initialize these variables to
:code:`None` each time we execute the task.
Loop Tasks run either a specified number of times or until a completion condition is met. Since we can't
know in advance how many products the customer will select, we'll add :code:`continue_shopping == 'Y'` as a
completion condition. We'll re-run this Call Activity as long as the customer indicates they want to choose
another product. We'll also set up the list of products that we plan on appending to.
We also added a postscript to this activity to delete the customization values so that we won't have to
look at them for the remainder of the workflow.
.. figure:: figures/multiinstance/loop_task.png
:scale: 30%
:align: center
Updated Documentation for 'Review Order'
Call Activity with Loop
.. note::
We also needed to update our Script Task and the Instructions of the Review Order Task to handle an array
of products rather than a single product.
Note that we are using a dot instead of the typical python dictionary access to obtain
the values. Spiff automatically generates such a representation, which simplifies creating the
documentation strings; however, regular Python syntax will work as well.
Here is our new script
.. code:: python
order_total = sum([ p['product_quantity'] * p['product_price'] for p in products ]) + shipping_cost
And our order summary
.. code:: python
Order Summary
{% for product in products %}
{{ product['product_name'] }}
Quantity: {{ product['product_quantity'] }}
Price: {{ product['product_price'] }}
{% endfor %}
Shipping Cost: {{ shipping_cost }}
Order Total: {{ order_total }}
Parallel MultiInstance
^^^^^^^^^^^^^^^^^^^^^^
We'll also update our 'Retrieve Product' task and 'Product Not Available' flows to
We'll also update our 'Retrieve Product' Task and 'Product Not Available' flows to
accommodate multiple products. We can use a Parallel MultiInstance for this, since
it does not matter what order our Employee retrieves the products in.
.. figure:: figures/multiinstance_task_configuration.png
.. figure:: figures/multiinstance/multiinstance_task_configuration.png
:scale: 30%
:align: center
MultiInstance task configuration
MultiInstance Task configuration
Spiff will generate a task for each of the items in the collection. Because of the way
SpiffWorkflow manages the data for these tasks, the collection MUST be a dictionary.
We've specified :code:`products` as our Input Collection and :code:`product` as our Input Item. The
Input Collection should be an existing collection. We'll create a task instance for each element of
the collection, and copy the value into the Input Item; this is how we'll access the data of the
element.
Each value in the dictionary will be copied into a variable with the name specified in
the 'Element Variable' field, so you'll need to specify this as well.
.. figure:: figures/multiinstance_form_configuration.png
:scale: 30%

.. code::

Item: {{product['product_quantity']}} of {{product['product_name']}}
We also specified :code:`availability` as our Output Collection. Since this variable does not exist,
SpiffWorkflow will automatically create it. You can use an existing variable as an Output Collection;
in this case, its contents will be updated with new values. The Output Item will be copied out of the
child task into the Output Collection.
The 'Retrieve Product' task creates :code:`product_available` from the form input.
Since our input is a list, our output will also be a list. It is possible to generate different output
types if you create the output collections before referring to them.
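As a rough sketch of this mapping in plain Python (the real instances run as separate tasks with their own data):
.. code:: python
# Illustrative only: Input Collection 'products', Input Item 'product',
# Output Collection 'availability'.
products = [{'product_name': 'product_a'}, {'product_name': 'product_b'}]
availability = []
for product in products:              # one task instance per element
    product_available = True         # set by the 'Retrieve Product' form
    availability.append(product_available)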
We have to update our gateway condition to handle the list:
.. figure:: figures/multiinstance/availability_flow.png
:scale: 60%
:align: center
MultiInstance form configuration
Gateway Condition
We'll also need to update the form field id so that the results will be added to the
item of the collection rather than the top level of the task data. This is where the
'Element Variable' field comes in: we'll need to change `product_available` to
`product.product_available`, because we set up `product` as our reference to the
current item.
.. figure:: figures/multiinstance_flow_configuration.png
:scale: 30%
:align: center
Product available flow configuration
Finally, we'll need to update our 'No' flow to check all items in the collection for
availability.
.. note::
In our form configuration, we used `product.product_available` but when we reference
it in the flow, we use the standard python dictionary syntax. We can't use that
notation in form fields, so in this case we need to use SpiffWorkflow's dot notation
conversion.
Sequential MultiInstance
^^^^^^^^^^^^^^^^^^^^^^^^
SpiffWorkflow also supports Sequential MultiInstance Tasks for previously defined
collections, or if the loopCardinality is known in advance, although we have not added an
example of this to our workflow.
For more information about MultiInstance Tasks and SpiffWorkflow, see :doc:`/bpmn/advanced`.
SpiffWorkflow also supports Sequential MultiInstance Tasks for collections, or if the loopCardinality
is known in advance, although we have not added an example of this to our workflow. Their configuration
is almost identical to that of Parallel MultiInstance Tasks.
Running The Model
^^^^^^^^^^^^^^^^^
If you have set up our example repository, this model can be run with the
following command:
If you have set up our example repository, this model can be run with the following command:
.. code-block:: console
./run.py -p order_product \
-d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
-b bpmn/multiinstance.bpmn bpmn/call_activity_multi.bpmn
./spiff-bpmn-runner.py -p order_product \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/top_level_multi.bpmn bpmn/tutorial/call_activity_multi.bpmn


@ -6,11 +6,11 @@ BPMN Model
We'll be using the following files from `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.
- `lanes <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/lanes.bpmn>`_ workflow
- `top_level <https://github.com/sartography/spiff-example-cli/bpmn/top_level.bpmn>`_ workflow
- `call_activity <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/call_activity.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/shipping_costs.dmn>`_ DMN table
- `lanes <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/lanes.bpmn>`_ workflow
- `top_level <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/top_level.bpmn>`_ workflow
- `call_activity <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/call_activity.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/shipping_costs.dmn>`_ DMN table
Lanes
^^^^^
@ -18,13 +18,13 @@ Lanes
Lanes are a method in BPMN to distinguish roles for the workflow and who is
responsible for which actions. In some cases this will be different business
units, and in some cases this will be different individuals - it really depends
on the nature of the workflow. Within a BPMN editor, this is done by choosing the
on the nature of the workflow. Within the BPMN editor, this is done by choosing the
'Create pool/participant' option from the toolbar on the left hand side.
We'll modify our workflow to get the customer's payment information and send it
to an employee who will charge the customer and fulfill the order.
.. figure:: figures/lanes.png
.. figure:: figures/organization/lanes.png
:scale: 30%
:align: center
@ -34,27 +34,23 @@ To run this workflow
.. code-block:: console
./run.py -p order_product \
-d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
-b bpmn/lanes.bpmn
./spiff-bpmn-runner.py -p order_product \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/lanes.bpmn
For a simple code example of displaying a task's lane, see `Handling Lanes`_
Subprocesses
^^^^^^^^^^^^
In general, subprocesses are a way of grouping work into smaller units. This, in
theory, will help us to re-use sections of business logic, but it will also allow
us to treat groups of work as a unit.
Subprocesses allow us to conceptualize a group of tasks as a unit by creating a
mini-workflow inside a task. Subprocess Tasks come in two different flavors: expanded
or collapsed. The difference between the two types is visual rather than functional.
Subprocesses come in two different flavors. In this workflow we see an Expanded
Subprocess. Unfortunately, we can't collapse an expanded subprocess within BPMN.js,
so expanded subprocesses are mainly useful for conceptualizing a group of tasks as
a unit.
It is also possible to refer to external subprocesses via a Call Activity Task. This
allows us to 'call' a separate workflow in a different file by referencing the ID of
the called workflow, which can simplify business logic and make it re-usable.
It is also possible to refer to external processes via a Call Activity Task. This
allows us to 'call' a separate Process (which might be stored independently of the
Process we're implementing) by referencing the ID of the called Process, which can simplify
business logic and make it re-usable.
We'll expand 'Fulfill Order' into sub tasks -- retrieving the product and shipping
the order -- and create an Expanded Subprocess.
@ -62,11 +58,11 @@ the order -- and create an Expanded Subprocess.
We'll also expand our selection of products, adding several new products and the ability
to customize certain products by size and style in addition to color.
.. figure:: figures/dmn_table_updated.png
:scale: 30%
.. figure:: figures/organization/dmn_table_updated.png
:scale: 60%
:align: center
Updated Product List
Updated product list
.. note::
@ -75,33 +71,32 @@ to customize certain products by size and style in addition to color.
the option of documenting the decisions contained in the table.
Since adding gateways for navigating the new options will add a certain amount of
clutter to our diagram, we'll create a separate workflow around selecting and
customizing products and refer to that in our main workflow.
clutter to our diagram, we'll create a separate workflow for selecting and customizing
products and refer to that in our main workflow.
.. figure:: figures/call_activity.png
.. figure:: figures/organization/call_activity.png
:scale: 30%
:align: center
Subworkflow for product selection
When configuring the subworkflow, we need to make sure the 'CallActivity Type' of the
parent workflow is 'BPMN' and the 'Called Element' matches the ID we assigned in the
subworkflow.
We need to make sure the 'Called Element' matches the ID we assigned in the called Process.
.. figure:: figures/top_level.png
.. figure:: figures/organization/top_level.png
:scale: 30%
:align: center
Parent workflow
Running the Model
^^^^^^^^^^^^^^^^^
.. code-block:: console
./run.py -p order_product \
-d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
-b bpmn/top_level.bpmn bpmn/call_activity.bpmn
./spiff-bpmn-runner.py -p order_product \
-d bpmn/tutorial/product_prices.dmn bpmn/tutorial/shipping_costs.dmn \
-b bpmn/tutorial/top_level.bpmn bpmn/tutorial/call_activity.bpmn
Example Application Code
------------------------
@ -115,14 +110,16 @@ our sample application, we'll simply display which lane a task belongs to.
.. code:: python
if hasattr(task.task_spec, 'lane') and task.task_spec.lane is not None:
    lane = f'[{task.task_spec.lane}]'
else:
    lane = ''

def get_task_description(self, task, include_state=True):
    task_spec = task.task_spec
    lane = f'{task_spec.lane}' if task_spec.lane is not None else '-'
    name = task_spec.bpmn_name if task_spec.bpmn_name is not None else '-'
    description = task_spec.description if task_spec.description is not None else 'Task'
    state = f'{task.get_state_name()}' if include_state else ''
    return f'[{lane}] {name} ({description}: {task_spec.bpmn_id}) {state}'

The task's lane can be obtained from :code:`task.task_spec.lane`. Not all tasks
will have a :code:`lane` attribute, so we need to check to make sure it exists
before attempting to access it (this is true for many task attributes).

See the Filtering Tasks Section of :doc:`advanced` for more information
about working with lanes in Spiff.

The task's lane can be obtained from :code:`task.task_spec.lane`, which will be :code:`None`
if the task is not part of a lane.

See the Filtering Tasks Section of :doc:`advanced` for more information about working with lanes in Spiff.


@ -11,39 +11,41 @@ BPMN and SpiffWorkflow
resources. We have used the `books by Bruce Silver <https://www.amazon.com/Bruce-Silver/e/B0062AXUFY/ref=dp_byline_cont_pop_book_1>`_
as a guide for our BPMN modeling.
.. image:: figures/bpmnbook.jpg
.. image:: figures/overview/bpmnbook.jpg
:align: center
Business Process Model and Notation (BPMN) is a diagramming language for
specifying business processes. BPMN links the realms of business and IT, and
creates a common process language that can be shared between the two.
Business Process Model and Notation (BPMN) is a diagramming language for specifying business
processes. BPMN links the realms of business and IT, and creates a common process language that
can be shared between the two.
BPMN describes details of process behaviors efficiently in a diagram. The meaning is precise enough
to describe the technical details that control process execution in an automation engine.
SpiffWorkflow allows you to create code to directly execute a BPMN diagram.
When using SpiffWorkflow, a client can manipulate the BPMN diagram and still have their product work
without a need for you to edit the Python code, improving response and turnaround time.
.. sidebar:: BPMN Modelers
Currently the best way to build BPMN diagrams is through our SpiffArena project
which provides a custom BPMN Modeler, along with ways to test and run BPMN diagrams
from within a web browser. Please see our `getting started guide <https://www.spiffworkflow.org/posts/articles/get_started/>`_
for more information.
It is also possible to use version 7 of the Camunda Modeler to create BPMN diagrams.
However, be cautious with the properties panel settings, as many of these settings are
not part of the BPMN standard and are not handled in the same way within SpiffWorkflow.
You can download the Camunda Modeler from `Camunda <https://camunda.com/download/modeler/>`_.
Today, nearly every process modeling tool supports BPMN in some fashion making it a great tool to
learn and use. This page provides a brief overview, and the following section provides a more
in-depth look. There are many resources for additional information about BPMN.
Most of the examples in this guide have been created with
`our modeler <https://github.com/sartography/bpmn-js-spiffworkflow>`_, which is based on
`bpmn.js <https://bpmn.io/toolkit/bpmn-js/>`_.
A Simple Workflow
@@ -56,7 +58,7 @@ by a single thick border circle.
The following example also has one task, represented by the rectangle with curved corners.
.. figure:: figures/overview/simplestworkflow.png
:scale: 25%
:align: center
@@ -70,7 +72,7 @@ the tail of a sequence flow completes, the node at the arrowhead is enabled to s
A More Complicated Workflow
---------------------------
.. figure:: figures/overview/ExclusiveGateway.png
:scale: 25%
:align: center
@@ -85,27 +87,36 @@ the other based on some data condition. BPMN has other gateway types.
The important point is that we can use a gateway to add a branch in the
workflow **without** creating an explicit branch in our Python code.
An Even More Complicated Workflow
---------------------------------
BPMN is a rich language that can describe many different types of processes. In
the following pages we'll cover lanes (a way to distribute work across different
roles), events (a way to handle asynchronous events), multi-instance tasks (which
can be executed many times in parallel or in sequence), and decomposition (the
many ways you can interconnect diagrams to build larger, more complex processes).
We are just scratching the surface. For now, let's take one more step and look
at what Events make possible.
Events
^^^^^^^
In the above simple workflows, all of the transitions are deterministic and we
have direct connections between tasks. We need to handle the cases where an event
may or may not happen, and link these events in different parts of the workflow or
across different workflows.
BPMN has a comprehensive suite of event elements. SpiffWorkflow does not support
every single BPMN event type, but it can handle many of them.
.. figure:: figures/overview/events.png
:scale: 25%
:align: center
A workflow containing events
We've already seen plain Start and End Events. BPMN also includes the concept
of Intermediate Events (standalone events that may be Throwing or Catching) as well
as Boundary Events (which are exclusively Catching).
All Start Events are inherently Catching Events (a workflow can be initiated if a
particular event occurs) and all End Events are Throwing Events (they can convey


@@ -1,112 +0,0 @@
Spiff Extensions
================
BPMN Model
----------
We'll be using the following files from `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.
- `bpmn-spiff/events <https://github.com/sartography/spiff-example-cli/blob/master/bpmn-spiff/events.bpmn>`_ workflow
- `bpmn-spiff/call activity <https://github.com/sartography/spiff-example-cli/blob/master/bpmn-spiff/call_activity.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/shipping_costs.dmn>`_ DMN table
We'll also be using the `run-spiff.py <https://github.com/sartography/spiff-example-cli/blob/master/run-spiff.py>`_ script
instead of the `run.py <https://github.com/sartography/spiff-example-cli/blob/master/run.py>`_ script.
Camunda's BPMN editor does not handle data objects in the expected way. You can create data object
references, but there is no way to re-use data objects.
It also does not support Message Correlations, and the interface for generating a message payload doesn't work
well in a Python environment.
We have extended BPMN.js to correct some of these issues. The examples in this section were created using our
custom BPMN editor, `bpmn-js-spiffworkflow <https://github.com/sartography/bpmn-js-spiffworkflow>`_.
Data Objects
^^^^^^^^^^^^
Data objects exist at a process level and are not visible in the diagram, but when you create a data object
reference, you can choose what data object it points to.
.. figure:: figures/data_object_configuration.png
:scale: 50%
:align: center
Configuring a data object reference
When a data output association (a line) is drawn from a task to a data object reference, the value is copied
from the task data to the workflow data and removed from the task. If a data input association is created from
a data object reference, the value is temporarily copied into the task data while the task is being executed,
and immediately removed afterwards.
This allows sensitive data to be removed from individual tasks (in our example, the customer's credit card
number). It can also be used to prevent large objects from being repeatedly copied from task to task.
Multiple data object references can point to the same underlying data. In our example, we use two references
to the same data object to pass the credit card info to both tasks that require it. On the right panel, we can
see that only one data object exists in the process.
.. figure:: figures/data_objects.png
:scale: 30%
:align: center
Data objects in a process
If you step through this workflow, you'll see that the card number is not contained in the task data after
the 'Enter Payment Info' task has been completed.
Configuring Messages
^^^^^^^^^^^^^^^^^^^^
Messages are handled slightly differently in Spiff Message Events. On a Message Throw Event or Send Task,
we define a payload, which is simply a bit of python code that will be evaluated against the task data and
sent along with the message. In the corresponding Message Catch Event or Receive Task, we define a
variable name where we'll store the result.
Spiff Messages can also optionally use correlation keys. The correlation key is an expression or set of
expressions that are evaluated against a message payload to create an additional identifier for associating
messages with processes.
In our example, it is possible that multiple QA processes could be started (the timer event will fire every
minute until the order fulfillment process is complete). In this case, the message name is insufficient, as
there will be multiple processes that can accept messages based on the name.
.. figure:: figures/correlation.png
:scale: 50%
:align: center
Defining a correlation key
We use the timestamp of the message creation as a unique key that can be used to distinguish between multiple
QA processes.
.. figure:: figures/spiff_message_throw.png
:scale: 50%
:align: center
Configuring a message throw event
When we receive the event, we assign the payload to :code:`order_info`.
.. figure:: figures/spiff_message_catch.png
:scale: 50%
:align: center
Configuring a message catch event
The correlation is visible on both the Throw and Catch Events, but it is associated with the message rather
than the tasks themselves; if you update the expression on either event, the changes will appear in both places.
Running The Model
^^^^^^^^^^^^^^^^^
If you have set up our example repository, this model can be run with the
following command:
.. code-block:: console

   ./run-spiff.py -p order_product \
   -d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
   -b bpmn-spiff/events.bpmn bpmn-spiff/call_activity.bpmn


@@ -4,237 +4,318 @@ Putting it All Together
In this section we'll be discussing the overall structure of the workflow
runner we developed in `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.
Our example application contains two different workflow runners, one that uses tasks with Spiff extensions
(`spiff-bpmn-runner.py <https://github.com/sartography/spiff-example-cli/blob/main/spiff-bpmn-runner.py>`_)
and one that uses the **deprecated** Camunda extensions
(`camunda-bpmn-runner.py <https://github.com/sartography/spiff-example-cli/blob/main/camunda-bpmn-runner.py>`_).
The primary differences between the two are in handling User and MultiInstance Tasks. We have some documentation
about how we interpret Camunda forms in :doc:`camunda/tasks`. That particular page comes from an earlier version of
our documentation, and `camunda-bpmn-runner.py` can run workflows with these tasks. However, we are not actively
maintaining the :code:`camunda` package, and it should be considered deprecated.
Base Application Runner
-----------------------
The core functions your application will have to accommodate are:
* parsing workflows
* serializing workflows
* running workflows
Task specs define how tasks are executed, and creating the task specs depends on a parser which initializes a spec of
the appropriate class. Serialization, of course, depends heavily on the same information needed to create
the instance. To that end, our BPMN runner requires that you provide a parser and a serializer; it can't operate unless
it knows what to do with each task spec it runs across.
Here is the initialization for the :code:`runner.SimpleBpmnRunner` class that is used by both scripts.
.. code:: python
parser = CamundaParser()
wf = parse_workflow(parser, args.process, args.bpmn, args.dmn)
def __init__(self, parser, serializer, script_engine=None, handlers=None):
Our workflow parser looks like this;
self.parser = parser
self.serializer = serializer
self.script_engine = script_engine
self.handlers = handlers or {}
self.workflow = None
If you read the introduction to BPMN, you'll remember that there's a Script Task; the script engine executes scripts
against the task data and updates it. Gateway conditions are also evaluated against the same context by the engine.
SpiffWorkflow provides a default scripting environment that is suitable for simple applications, but a serious application
will probably need to extend (or restrict) it in some way. See :doc:`advanced` for a few examples. Therefore, we have the
ability to optionally pass one in.
The `handlers` argument allows us to let our application know what to do with specific task spec types. It's a mapping
of task spec class to its handler. Most task specs won't need handlers outside of how SpiffWorkflow executes them
(that's probably why you are using this library). You'll only have to be concerned with the task spec types that
require human interaction; Spiff will not handle those for you. In your application, these will probably be built into
it and you won't need to pass anything in.
However, here we're trying to build something flexible enough that it can at least deal with two completely different
mechanisms for handling User Tasks, and provide a means for you to experiment with this application.
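For instance, assembling a runner might look like this (a sketch; the parser, serializer, and handler
functions are each built in the sections that follow):

.. code:: python

   runner = SimpleBpmnRunner(parser, serializer, handlers={UserTask: complete_user_task})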
Parsing Workflows
-----------------
Here is the method we use to parse the workflows:
.. code:: python

   def parse(self, name, bpmn_files, dmn_files=None, collaboration=False):

       self.parser.add_bpmn_files(bpmn_files)
       if dmn_files:
           self.parser.add_dmn_files(dmn_files)
       if collaboration:
           top_level, subprocesses = self.parser.get_collaboration(name)
       else:
           top_level = self.parser.get_spec(name)
           subprocesses = self.parser.get_subprocess_specs(name)
       self.workflow = BpmnWorkflow(top_level, subprocesses, script_engine=self.script_engine)
We add the BPMN and DMN files to the parser and use :code:`parser.get_spec` to create a workflow spec for a process
model.
SpiffWorkflow needs at least one spec to create a workflow; this will be created from the name of the process passed
into the method. It also needs specs for any subprocesses or call activities. The method
:code:`parser.get_subprocess_specs` will search recursively through a starting spec and collect specs for all
referenced resources.
It is possible to have two processes defined in a single model, via a Collaboration. In this case, there is no "top
level spec". We can use :code:`self.parser.get_collaboration` to handle this case.
.. note::
The only required argument to :code:`BpmnWorkflow` is a single workflow spec, in this case `top_level`. The
parser returns an empty dict if no subprocesses are present, but it is not required to pass this in. If there
are subprocesses present, `subprocess_specs` will be a mapping of process ID to :code:`BpmnWorkflowSpec`.
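As a concrete example, the tutorial models could be loaded like this (a hypothetical invocation; the
paths are taken from the run commands shown elsewhere in this guide):

.. code:: python

   runner.parse(
       'order_product',
       ['bpmn/tutorial/top_level.bpmn', 'bpmn/tutorial/call_activity.bpmn'],
       dmn_files=['bpmn/tutorial/product_prices.dmn', 'bpmn/tutorial/shipping_costs.dmn'],
   )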
In :code:`spiff-bpmn-runner.py` we create the parser like this:
.. code:: python
from SpiffWorkflow.spiff.parser.process import SpiffBpmnParser, BpmnValidator
parser = SpiffBpmnParser(validator=BpmnValidator())
The validator is an optional argument, which can be used to validate the BPMN files passed in. The :code:`BpmnValidator`
in the :code:`spiff` package is configured to validate against the BPMN 2.0 spec and our spec describing our own
extensions.
The parser we imported is pre-configured to create task specs that know about Spiff extensions.
There are parsers in both the :code:`bpmn` and :code:`camunda` packages that can be similarly imported. There is a
validator that uses only the BPMN 2.0 spec in the :code:`bpmn` package (but no similar validator for Camunda).
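For example, an equivalent setup with the core :code:`bpmn` package might look like this (a sketch; we
are assuming these import paths, so double-check them against the package):

.. code:: python

   from SpiffWorkflow.bpmn.parser.BpmnParser import BpmnParser, BpmnValidator

   # this validator checks only against the BPMN 2.0 spec, with no extensions
   parser = BpmnParser(validator=BpmnValidator())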
It is possible to override particular task specs for specific BPMN Task types. We'll cover an example of this in
:doc:`advanced`.
Serializing Workflows
---------------------
In addition to the pre-configured parser, each package has a pre-configured serializer.
.. code:: python
from SpiffWorkflow.spiff.serializer.config import SPIFF_SPEC_CONFIG
from runner.product_info import registry
wf_spec_converter = BpmnWorkflowSerializer.configure_workflow_spec_converter(SPIFF_SPEC_CONFIG)
serializer = BpmnWorkflowSerializer(wf_spec_converter, registry)
The serializer has two components:
* the `workflow_spec_converter`, which knows about objects inside SpiffWorkflow
* the `registry`, which can tell SpiffWorkflow how to handle arbitrary data from your scripting environment
(required only if you have non-JSON-serializable data there).
We discuss the creation and use of `registry` in :doc:`advanced` so we'll ignore it for now.
`SPIFF_SPEC_CONFIG` has serialization methods for each of the task specs in its parser and we can create a
converter from it directly and pass it into our serializer.
Here is our deserialization code:
.. code:: python
def restore(self, filename):
with open(filename) as fh:
self.workflow = self.serializer.deserialize_json(fh.read())
if self.script_engine is not None:
self.workflow.script_engine = self.script_engine
We'll just pass the contents of the file to the serializer and it will restore the workflow. The scripting environment
was not serialized, so we have to make sure we reset it.
And here is our serialization code:
.. code:: python
def dump(self):
filename = input('Enter filename: ')
with open(filename, 'w') as fh:
dct = self.serializer.workflow_to_dict(self.workflow)
dct[self.serializer.VERSION_KEY] = self.serializer.VERSION
fh.write(json.dumps(dct, indent=2, separators=[', ', ': ']))
The serializer has a companion method :code:`serialize_json` but we're bypassing that here so that we can make the
output readable.
The heart of the serialization process actually happens in :code:`workflow_to_dict`. This method returns a
dictionary representation of the workflow that contains only JSON-serializable items. All :code:`serialize_json`
does is add a serializer version and call :code:`json.dumps` on the returned dict. If you are developing a serious
application, it is unlikely you want to store the entire workflow as a string, so you should be aware that this method
exists.
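If readable output isn't a concern, the companion method can do the whole thing in one step (a sketch,
assuming the serializer and workflow created above):

.. code:: python

   with open('workflow.json', 'w') as fh:
       fh.write(self.serializer.serialize_json(self.workflow))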
The serializer is fairly complex: not only does it need to handle SpiffWorkflow's own internal objects that it
knows about, it needs to handle arbitrary Python objects in the scripting environment. The serializer is covered in
more depth in :doc:`advanced`.
Defining Task Handlers
----------------------
In :code:`spiff-bpmn-runner.py`, we also define the functions :code:`complete_user_task` and
:code:`complete_manual_task`.
We went over these handlers in :doc:`tasks`, so we won't delve into them here.
We create a mapping of task type to handler, which we'll pass to our workflow runner.
.. code:: python

   handlers = {
       UserTask: complete_user_task,
       ManualTask: complete_manual_task,
       NoneTask: complete_manual_task,
   }
This might not be a step you would need to do in an application you build, since
you would likely have only one set of task specs that need to be parsed, handled, and
serialized; however, our `run` method is an awful lot of code to maintain in two separate
files.
In SpiffWorkflow, the :code:`NoneTask` (which corresponds to the `bpmn:task`) is treated as a human task, and there
is no built-in way of handling it. Here we treat them as if they were Manual Tasks.
Running Workflows
-----------------
Our application's :code:`run_workflow` method takes one argument: `step` is a boolean that lets the runner know
if it should stop and present the menu at every step (if :code:`True`) or only when there are human tasks to
complete.
.. code:: python

   def run_workflow(self, step=False):

       while not self.workflow.is_completed():

           if not step:
               self.advance()

           tasks = self.workflow.get_tasks(TaskState.READY|TaskState.WAITING)
           runnable = [t for t in tasks if t.state == TaskState.READY]
           human_tasks = [t for t in runnable if t.task_spec.manual]
           current_tasks = human_tasks if not step else runnable

           self.list_tasks(tasks, 'Ready and Waiting Tasks')
           if len(current_tasks) > 0:
               action = self.show_workflow_options(current_tasks)
           else:
               action = None
               if len(tasks) > 0:
                   input("\nPress any key to update task list")
In the code above we first get the list of all `READY` or `WAITING` tasks; these are the currently active tasks.
`READY` tasks can be run, and `WAITING` tasks may change to `READY` (see :doc:`../concepts` for a discussion of task
states). We aren't going to do anything with the `WAITING` tasks except display them.
We can further filter our runnable tasks on the :code:`task_spec.manual` attribute. If we're stepping through the
workflow, we'll present the entire list; otherwise only the human tasks. There are actually many points where no
human tasks are available to execute; the :code:`advance` method runs the other runnable tasks if we've opted to
skip displaying them; we'll look at that method after this one.
There may also be points where there are no runnable tasks at all (for example, if the entire process is waiting
on a timer). In that case, we'll do nothing until the user indicates we can proceed (the timer will fire
regardless of what the user does -- we're just preventing this loop from executing repeatedly when there's nothing
to do).
.. code:: python

   if action == 'r':
       task = self.select_task(current_tasks)
       handler = self.handlers.get(type(task.task_spec))
       if handler is not None:
           handler(task)
       task.run()
In the code above, we present a menu of runnable tasks to the user and run the one they chose, optionally
calling one of our handlers.
Each task has a `data` attribute, which can optionally be updated when the task is `READY` and before it is
run. The task `data` is just a dictionary. Our handler modifies the task data if necessary (e.g. adding data
collected from forms), and :code:`task.run` propagates the data to any tasks following it, and changes its state to
one of the `FINISHED` states; nothing more will be done with this task after this point.
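In other words, the smallest possible interaction with a ready human task looks something like this (a
sketch; the data key is made up):

.. code:: python

   task.data['product_name'] = input('Product: ')  # update the task data while READY
   task.run()  # propagate the data to following tasks and finish this one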
We'll skip over most of the options in :code:`run_workflow` since they are pretty straightforward.
.. code:: python

   self.workflow.refresh_waiting_tasks()
At the end of each iteration, we call :code:`refresh_waiting_tasks` to ensure that any currently `WAITING` tasks
will move to `READY` if they are able to do so.
After the workflow finishes, we'll give the user a few options for looking at the end state.
.. code:: python

   while action != 'q':
       action = self.show_prompt('\nSelect action: ', {
           'a': 'List all tasks',
           'v': 'View workflow data',
           'q': 'Quit',
       })
       if action == 'a':
           self.list_tasks([t for t in self.workflow.get_tasks() if t.task_spec.bpmn_id is not None], "All Tasks")
       elif action == 'v':
           dct = self.serializer.data_converter.convert(self.workflow.data)
           print('\n' + json.dumps(dct, indent=2, separators=[', ', ': ']))
Note that we're filtering the task lists with :code:`t.task_spec.bpmn_id is not None`. The workflow contains
tasks other than the ones visible on the BPMN diagram; these are tasks that SpiffWorkflow uses to manage execution
and we'll omit them from the displays. If a task is visible on a diagram it will have a non-null value for its
`bpmn_id` attribute (because all BPMN elements require IDs), otherwise the value will be :code:`None`. See
:doc:`advanced` for more information about BPMN task spec attributes.
When a workflow completes, the task data from the "End" task, which has built up through the operation of the
workflow, is copied into the workflow data, so we want to give the option to display this end state. We're using
the serializer's `data_converter` to handle the workflow data (the `registry`) we passed in earlier, because
it may contain arbitrary data.
Let's take a brief look at the advance method:
.. code:: python

   def advance(self):

       engine_tasks = [t for t in self.workflow.get_tasks(TaskState.READY) if not t.task_spec.manual]
       while len(engine_tasks) > 0:
           for task in engine_tasks:
               task.run()
           self.workflow.refresh_waiting_tasks()
           engine_tasks = [t for t in self.workflow.get_tasks(TaskState.READY) if not t.task_spec.manual]
This method is really just a condensed version of :code:`run_workflow` that ignores human tasks and doesn't need to
present a menu. We use it to get to a point in our workflow where there are only human tasks left to run.
In general, an application that uses SpiffWorkflow will use these methods as a template. It will consist of a
loop, sketched in code after this list, that:
* runs any `READY` engine tasks (where :code:`task_spec.manual == False`)
* presents `READY` human tasks to users (if any)
* updates the human task data if necessary
* runs the human tasks
* refreshes any `WAITING` tasks
until there are no tasks left to complete.
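Condensed into code, that loop might look like this (a sketch, not the example application's exact code):

.. code:: python

   while not workflow.is_completed():
       for task in workflow.get_tasks(TaskState.READY):
           if task.task_spec.manual:
               pass  # present the task to a user and update task.data here
           task.run()
       workflow.refresh_waiting_tasks()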
The rest of the code is all about presenting the tasks to the user and dumping the workflow state. These are the
parts that you'll want to customize in your own application.


@@ -4,46 +4,46 @@ Tasks
BPMN Model
----------
In this example, we'll model a customer selecting a product to illustrate the basic task types that
can be used with SpiffWorkflow.
We'll be using the following files from `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_:
- `task_types <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/task_types.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/main/bpmn/tutorial/product_prices.dmn>`_ DMN table
User Tasks
^^^^^^^^^^
User Tasks would typically be used in the case where the task would be completed from within the
application. Our User tasks present forms that collect data from users.
We'll ask our hypothetical user to choose a product and quantity.
.. figure:: figures/tasks/user_task.png
:scale: 30%
:align: center
User Task configuration
.. figure:: figures/tasks/user_task_form.png
:scale: 30%
:align: center
User Task form
See the `Handling User Tasks`_ section for a discussion of sample code.
We have also retained some limited support for the now deprecated
camunda forms, which you can read about in our Camunda Specific section on :doc:`camunda/tasks`.
Business Rule Tasks
^^^^^^^^^^^^^^^^^^^
In our Business Rule Task, we'll use a DMN table to look up the price of the
product the user chose.
We'll need to create a DMN table.
@@ -67,7 +67,7 @@ the decision lookup allows the next gateway or activity to route the flow.
Our Business Rule Task will make use of a DMN table.
.. figure:: figures/tasks/dmn_table.png
:scale: 30%
:align: center
@@ -81,7 +81,7 @@ Our Business Rule Task will make use of a DMN table.
Then we'll refer to this table in the task configuration.
.. figure:: figures/tasks/business_rule_task.png
:scale: 30%
:align: center
@@ -91,9 +91,9 @@ Script Tasks
^^^^^^^^^^^^
The total order cost will need to be calculated on the fly. We can do this in
a Script Task. We'll configure the task with some simple Python code.
.. figure:: figures/tasks/script_task.png
:scale: 30%
:align: center
@@ -105,36 +110,45 @@ have been defined previously will be available to it.
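For instance, the script for computing our order total might be as simple as this (a sketch; the
variable names are illustrative, not necessarily those used in the model):

.. code:: python

   # executed against the task data, so existing variables are in scope
   # and new assignments become task data
   order_total = product_quantity * product_price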
Manual Tasks
^^^^^^^^^^^^
Our final task type is a Manual Task. Manual Tasks represent work that occurs
outside of SpiffWorkflow's control. Say that you need to include a step in a
process where the participant needs to stand up, walk over to the coffee maker,
and pour a cup of coffee. Manual Tasks pause the process, and wait for
confirmation that the step was completed.
In this example, we'll present an order summary to our customer.
.. figure:: figures/tasks/manual_task.png
:scale: 30%
:align: center
Manual Task
Spiff's manual tasks may contain references to data inside the workflow. We have used
`Jinja <https://jinja.palletsprojects.com/en/3.0.x/>`_, but Spiff is set up in a way that
you could use any templating library you want, as well as Markdown formatting directives
(we won't implement those here though, because it doesn't make sense for a command
line app).
.. figure:: figures/tasks/manual_task_instructions.png
:scale: 30%
:align: center
Editing Instructions
See the `Handling Manual Tasks`_ section for a discussion of sample code.
For information about how Spiff handles Manual Tasks created with Camunda, please
refer to the Camunda Specific section on :doc:`camunda/tasks`.
Running The Model
^^^^^^^^^^^^^^^^^
If you have set up our example repository, this model can be run with the following command:
.. code-block:: console
./spiff-bpmn-runner.py -p order_product -d bpmn/tutorial/product_prices.dmn -b bpmn/tutorial/task_types.bpmn
Example Application Code
------------------------
@@ -147,50 +156,49 @@ responses.
.. code:: python

   filename = task.task_spec.extensions['properties']['formJsonSchemaFilename']
   schema = json.load(open(os.path.join(forms_dir, filename)))
   for field, config in schema['properties'].items():
       if 'oneOf' in config:
           option_map = dict([ (v['title'], v['const']) for v in config['oneOf'] ])
           options = "(" + ', '.join(option_map) + ")"
           prompt = f"{field} {options} "
           option = input(prompt)
           while option not in option_map:
               print('Invalid selection!')
               option = input(prompt)
           response = option_map[option]
       else:
           response = input(f"{config['title']} ")
           if config['type'] == 'integer':
               response = int(response)
       task.data[field] = response
SpiffWorkflow uses JSON Schema to represent forms, specifically
`react-jsonschema-form <https://react-jsonschema-form.readthedocs.io/en/latest/>`_.
Our forms are really intended to be displayed in a browser, and attempting to handle them in a command
line application is a little awkward. The form specifications can be quite complex.
This simple implementation will present a list of options for enumerated fields and otherwise store whatever
the user enters directly, converting to an integer when the field specifies it. This is robust enough to collect
enough information from a user to make it through our example.
SpiffWorkflow provides a mechanism for you to provide your own form specification and leaves it up to you
to decide how to present it.
Handling Business Rule Tasks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We do not need to do any special configuration to handle these Business Rule Tasks. SpiffWorkflow does it all for us.
Handling Script Tasks
^^^^^^^^^^^^^^^^^^^^^
We do not need to do any special configuration to handle Script Tasks, although it
is possible to implement a custom script engine. We demonstrate that process in the
Custom Script Engines section of :doc:`advanced`. However, the default script
engine will be adequate for now.
Handling Manual Tasks
^^^^^^^^^^^^^^^^^^^^^
@@ -201,21 +209,26 @@ completed.
.. code:: python

   def complete_manual_task(task):
       display_instructions(task)
       input("Press any key to mark task complete")
:code:`display_instructions` handles presenting the task to the user.
.. code:: python

   def display_instructions(task):
       text = task.task_spec.extensions.get('instructionsForEndUser')
       print(f'\n{task.task_spec.bpmn_name}')
       if text is not None:
           template = Template(text)
           print(template.render(task.data))
The template string can be obtained from :code:`task.task_spec.extensions.get('instructionsForEndUser')`.
As noted above, our template class comes from Jinja. We render the template
using the task data, which is just a dictionary.
.. note::
Most of Spiff's task specifications contain this extension, not just Manual Tasks. We also use it to display
information along with forms, and about certain events.

doc/concepts.rst (new file)

@@ -0,0 +1,120 @@
SpiffWorkflow Concepts
======================
Specifications vs. Instances
----------------------------
SpiffWorkflow consists of two different categories of objects:
- **Specification objects**, which represent the definitions and derive from :code:`WorkflowSpec` and :code:`TaskSpec`
- **Instance objects**, which represent the state of a running workflow (:code:`Workflow`, :code:`BpmnWorkflow` and :code:`Task`)
In the workflow context, a specification is a model of the workflow, an abstraction that describes *every path that could
be taken whenever the workflow is executed*. An instance is a particular instantiation of a specification. It describes *the
current state* or *the path(s) that were actually taken when the workflow ran*.
In the task context, a specification is a model for how a task behaves. It describes the mechanisms for deciding *whether
there are preconditions for running an associated task*, *how to decide whether they are met*, and *what it means to complete
(successfully or unsuccessfully)*. An instance describes the *state of the task, as it pertains to a particular workflow* and
*contains the data used to manage that state*.
Specifications are unique, whereas instances are not. There is *one* model of a workflow, and *one* specification for a particular task.
Imagine a workflow with a loop. The loop is defined once in the specification, but there can be many tasks associated with
each of the specs that comprise the loop.
Our BPMN example described a product selection process::
Start -> Select and Customize Product -> Continue Shopping?
Since the customer can potentially select more than one product, how our instance looks depends on the customer's actions. If
they choose three products, then we get the following tree::
Start --> Select and Customize Product -> Continue Shopping?
|-> Select and Customize Product -> Continue Shopping?
|-> Select and Customize Product -> Continue Shopping?
There is *one* TaskSpec describing product selection and customization and *one* TaskSpec that determines whether to add more
items, but it may execute any number of times, resulting in as many Tasks for these TaskSpecs as the number of products the
customer selects.
Understanding Task States
-------------------------
* **PREDICTED** Tasks
A predicted task is one that will possibly, but not necessarily run at a future time. For example, if a task follows a
conditional gateway, which path is taken won't be known until the gateway is reached and the conditions evaluated. There
are two types of predicted tasks:
- **MAYBE**: The task is part of a conditional path
- **LIKELY**: The task is the default output on a conditional path
* **DEFINITE** Tasks
Definite tasks are certain to run as the workflow progresses.
- **FUTURE**: The task will definitely run.
- **WAITING**: A condition must be met before the task can become **READY**
- **READY**: The preconditions for running this task have been met
- **STARTED**: The task has started running but has not finished
* **FINISHED** Tasks
A finished task is one where no further action will be taken.
- **COMPLETED**: The task finished successfully.
- **ERROR**: The task finished unsuccessfully.
- **CANCELLED**: The task was cancelled before it ran or while it was running.
Tasks start in either a **PREDICTED** or **FUTURE** state, move through one or more **DEFINITE** states, and end in a
**FINISHED** state. State changes are determined by several task spec methods:
Hooks
-----
SpiffWorkflow executes a Task by calling a series of hooks that are tightly coupled
to Task State. These hooks are:
* `_update_hook`: This method will be run by a task's predecessor when the predecessor completes. The method checks the
preconditions for running the task and returns a boolean indicating whether the task should become **READY**; if the
preconditions are not met, the state will be set to **WAITING** instead.
* `_on_ready_hook`: This method will be run when the task becomes **READY** (but before it runs).
* `run_hook`: This method implements the task's behavior when it is run, returning:
- :code:`True` if the task completed successfully. The state will transition to **COMPLETED**.
- :code:`False` if the task completed unsuccessfully. The state will transition to **ERROR**.
- :code:`None` if the task has not completed. The state will transition to **STARTED**.
* `_on_complete_hook`: This method will be run when the task's state is changed to **COMPLETED**.
* `_on_error_hook`: This method will be run when the task's state is changed to **ERROR**.
* `_on_trigger`: This method executes the task's behavior when it is triggered (`Trigger` tasks only).
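For example, a custom task spec in the core library can tap into this lifecycle by overriding one of these
hooks (a sketch; :code:`LoggingTask` is a made-up spec, not part of the library):

.. code:: python

   from SpiffWorkflow.specs.Simple import Simple

   class LoggingTask(Simple):
       # report when a task associated with this spec completes
       def _on_complete_hook(self, my_task):
           print(f'{my_task.task_spec.name} completed')
           return super()._on_complete_hook(my_task)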
Task Prediction
---------------
Each TaskSpec also has a `_predict_hook` method, which is used to set the state of not-yet-executed children. The behavior
of `_predict_hook` varies by TaskSpec. This is the mechanism that determines whether Tasks are **FUTURE**, **LIKELY**, or
**MAYBE**. When a workflow is created, a task tree is generated that contains all definite paths, and branches of
**PREDICTED** tasks with a maximum length of two. If a **PREDICTED** task becomes **DEFINITE**, the Task's descendants
are re-predicted. If it's determined that a **PREDICTED** task will not run, the task and all its descendants will be dropped
from the tree. By default `_predict_hook` will ignore **DEFINITE** tasks, but this can be overridden by providing a
mask of `TaskState` values that specifies states other than **PREDICTED**.
Where Data is Stored
--------------------
Data can be associated with workflows in the following ways:
- **Workflow data** is stored on the Workflow, with changes affecting all Tasks.
- **Task data** is local to the Task, initialized from the data of the Task's parent.
- **Task internal data** is local to the Task and not passed to the Task's children.
- **Task spec data** is stored in the TaskSpec object, and if updated, the updates will apply to any Task that references the spec
(unused by the :code:`bpmn` package and derivatives).
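As a rough illustration of the first two (a sketch; the keys are made up):

.. code:: python

   task.data['quantity'] = 2       # task data: local, copied to children when the task runs
   workflow.data['order_id'] = 99  # workflow data: visible to the whole workflow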


@@ -18,11 +18,11 @@
# -- Project information -----------------------------------------------------
project = 'SpiffWorkflow'
copyright = '2023, Sartography'
author = 'Sartography'
# The full version, including alpha/beta/rc tags
release = '2.0.0rc0'
# -- General configuration ---------------------------------------------------


@@ -7,9 +7,8 @@ Introduction
In this second tutorial, we are going to implement our own task, and
use serialization and deserialization to store and restore it.
If you haven't already, you should complete the first :doc:`../tutorial/index`.
We are also assuming that you are familiar with the :doc:`../../concepts`.
Implementing the custom task
----------------------------


@@ -1,13 +1,12 @@
import json
from SpiffWorkflow.workflow import Workflow
from SpiffWorkflow.specs.WorkflowSpec import WorkflowSpec
from serializer import NuclearSerializer
# Load from JSON
with open('nuclear.json') as fp:
workflow_json = fp.read()
nuclear_serializer = NuclearSerializer()
spec = WorkflowSpec.deserialize(nuclear_serializer, workflow_json)
# Create the workflow.
workflow = Workflow(spec)
@@ -15,4 +14,4 @@ workflow = Workflow(spec)
# Execute until all tasks are done or require manual intervention.
# For the sake of this tutorial, we ignore the "manual" flag on the
# tasks. In practice, you probably don't want to do that.
workflow.run_all(halt_on_manual=False)


@@ -1,4 +1,4 @@
from SpiffWorkflow.specs.Simple import Simple
class NuclearStrike(Simple):
def _on_complete_hook(self, my_task):

doc/core/index.rst (new file)

@@ -0,0 +1,14 @@
Core Library
============
SpiffWorkflow's BPMN support is built on top of a core library that aims to be a general workflow
execution environment. Workflow specifications can be created from a simple XML format, or even
easily in Python code. It supports a wide range of task specifications and workflow patterns, making
it amenable to adaptation to many different schemas for defining workflow behavior.
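As a taste of the core API, here is a minimal workflow built directly in Python (a sketch based on the
classes used in the tutorial; the spec and task names are made up):

.. code:: python

   from SpiffWorkflow.workflow import Workflow
   from SpiffWorkflow.specs.WorkflowSpec import WorkflowSpec
   from SpiffWorkflow.specs.Simple import Simple

   # build a spec with a single task connected to the start task
   spec = WorkflowSpec('hello')
   task = Simple(spec, 'hello_task')
   spec.start.connect(task)

   # create an instance of the spec and run it to completion
   workflow = Workflow(spec)
   workflow.run_all()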
.. toctree::
:maxdepth: 2
tutorial/index
custom-tasks/index
patterns


@@ -5,7 +5,7 @@ Supported Workflow Patterns
.. HINT::
All examples are located
`here <https://github.com/sartography/SpiffWorkflow/tree/main/tests/SpiffWorkflow/core/data>`_.
Control-Flow Patterns
---------------------


@@ -7,7 +7,7 @@ Introduction
In this chapter we are going to use Spiff Workflow to solve a real-world
problem: We will create a workflow for triggering a nuclear strike.
We are assuming that you are familiar with the :doc:`../../concepts`.
Assume you want to send the rockets, but only after both the president and
a general have signed off on it.


@@ -2,23 +2,23 @@
"task_specs": {
"Start": {
"class": "SpiffWorkflow.specs.StartTask.StartTask",
"id" : 1,
"id" : 1,
"manual": false,
"outputs": [
"general"
]
},
"general": {
"class": "SpiffWorkflow.specs.ExclusiveChoice.ExclusiveChoice",
"name": "general",
"id" : 2,
"id" : 2,
"manual": true,
"inputs": [
"Start"
],
"outputs": [
"workflow_aborted",
"president"
],
"choice": null,
"default_task_spec": "workflow_aborted",
@@ -44,14 +44,14 @@
"president": {
"class": "SpiffWorkflow.specs.ExclusiveChoice.ExclusiveChoice",
"name": "president",
"id" : 3,
"id" : 3,
"manual": true,
"inputs": [
"general"
],
"outputs": [
"workflow_aborted",
"nuclear_strike"
],
"choice": null,
"default_task_spec": "workflow_aborted",
@@ -75,11 +75,11 @@
]
},
"nuclear_strike": {
"id" : 4,
"id" : 4,
"class": "SpiffWorkflow.specs.Simple.Simple",
"name": "nuclear_strike",
"inputs": [
"president"
]
},
"workflow_aborted": {
@@ -87,8 +87,8 @@
"class": "SpiffWorkflow.specs.Cancel.Cancel",
"name": "workflow_aborted",
"inputs": [
"general",
"president"
]
}
},
