Squashed 'SpiffWorkflow/' changes from 63db3e459..9d597627b
9d597627b Merge pull request #250 from sartography/feature/documentation-updates-for-1.2
896447557 Merge pull request #249 from sartography/feature/improved-logging
10b710498 remove comments
3e8a29d6f Merge branch 'main' into feature/improved-logging
ca8a88d9e first draft of updated docs
f0bb63294 remove one unnecessary log statement during BpmnWorkflow init and add some debug info about the other two
acce225e0 do not log task & workflow changes when deserializing

git-subtree-dir: SpiffWorkflow
git-subtree-split: 9d597627b46d236c684ca5d62cae16cfed6e1dec
This commit is contained in:
parent 0892db6fa7
commit c04049105f

@ -1,11 +1,8 @@
A More In-Depth Look at Some of SpiffWorkflow's Features
========================================================

Filtering Tasks
---------------

In our earlier example, all we did was check the lane a task was in and display
it along with the task name and state.
@ -36,203 +33,49 @@ correspond to which states).

.. code:: python

    from SpiffWorkflow.task import TaskState

To get a list of completed tasks:

.. code:: python

    tasks = workflow.get_tasks(TaskState.COMPLETED)
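
The returned items are :code:`Task` instances, so a quick way to inspect the result is to print each task's spec name and state (a minimal sketch; the attribute names are the same ones used in the state-display code later in this guide):

.. code:: python

    # Print the spec name and numeric state of each task we retrieved.
    for task in tasks:
        print(task.task_spec.name, task.state)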

Logging
-------

Spiff provides several loggers:

- the :code:`spiff` logger, which emits messages when a workflow is initialized and when tasks change state
- the :code:`spiff.metrics` logger, which emits messages containing the elapsed duration of tasks
- the :code:`spiff.data` logger, which emits messages when task or workflow data is updated.

Log level :code:`INFO` will provide reasonably detailed information about state changes.
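
For example, a minimal way to surface these messages is ordinary standard-library logging configuration (a sketch; only the logger names above come from Spiff):

.. code:: python

    import logging

    # Messages from 'spiff.metrics' and 'spiff.data' propagate to the
    # parent 'spiff' logger, so one handler covers all three.
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(name)s %(levelname)s %(message)s'))
    spiff_logger = logging.getLogger('spiff')
    spiff_logger.addHandler(handler)
    spiff_logger.setLevel(logging.INFO)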

As usual, log level :code:`DEBUG` will probably provide more logs than you really want
to see, but the logs will contain the task and task internal data.

Data can be included at any level less than :code:`INFO`. In our example application,
we define a custom log level

.. code:: python

    logging.addLevelName(15, 'DATA_LOG')

so that we can see the task data in the logs without fully enabling debugging.
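
A sketch of how that level might then be enabled for data logging (assuming the handler configuration shown earlier; whether a given runner emits records at this level depends on the application):

.. code:: python

    # 15 sits between DEBUG (10) and INFO (20), so DATA_LOG records show up
    # without turning on full debug output.
    logging.getLogger('spiff.data').setLevel(15)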

The workflow runners take an `-l` argument that can be used to specify the logging level used
when running the example workflows.

Serialization
-------------

.. warning::

   Serialization changed in version 1.1.7.
   Support for pre-1.1.7 serialization will be dropped in a future release.
   The old serialization method still works, but it is deprecated.
   To migrate your system to the new version, see "Migrating between
   serialization versions" below.
@ -242,37 +85,32 @@ setting. This may not always be the case, we may be executing the workflow in th

may have a user request a web page where we open a specific workflow that we may be in the middle of, do one step of
that workflow and then the user may be back in a few minutes, or maybe a few hours depending on the application.

To accomplish this, we can import the serializer

.. code:: python

    from SpiffWorkflow.bpmn.serializer import BpmnWorkflowSerializer

The :code:`BpmnWorkflowSerializer` class contains a serializer for a workflow containing only standard BPMN Tasks.
Since we are using custom task classes (the Camunda :code:`UserTask` and the DMN :code:`BusinessRuleTask`),
we'll need to supply serializers for those task specs as well.

.. code:: python

    from SpiffWorkflow.camunda.serializer import UserTaskConverter
    from SpiffWorkflow.dmn.serializer import BusinessRuleTaskConverter

Strictly speaking, these are not serializers per se: they actually convert the tasks into dictionaries of
JSON-serializable objects. Conversion to JSON is done only as the last step and could easily be replaced with some
other output format.

We'll need to configure a Workflow Spec Converter with our custom classes, as well as an optional
custom data converter.

.. code:: python

    def create_serializer(task_types, data_converter=None):
        wf_spec_converter = BpmnWorkflowSerializer.configure_workflow_spec_converter(task_types)
        return BpmnWorkflowSerializer(wf_spec_converter, data_converter)

We'll call this from our main script:

.. code:: python

    serializer = create_serializer([ UserTaskConverter, BusinessRuleTaskConverter ], custom_data_converter)

We first configure a workflow spec converter that uses our custom task converters, and then we create
a :code:`BpmnWorkflowSerializer` from our workflow spec and data converters.

We'll give the user the option of dumping the workflow at any time.
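
A sketch of what dumping and restoring might look like, assuming this version's JSON convenience methods (:code:`serialize_json` and :code:`deserialize_json`):

.. code:: python

    # Dump the current workflow state to a file...
    with open('workflow.json', 'w') as dump:
        dump.write(serializer.serialize_json(workflow))

    # ...and restore it in a later request or process.
    with open('workflow.json') as dump:
        workflow = serializer.deserialize_json(dump.read())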
@ -300,15 +138,15 @@ two components:

- a data converter (which handles workflow and task data).

The default workflow spec converter is likely to meet your needs, either on its own, or with the inclusion of
:code:`UserTask` and :code:`BusinessRuleTask` in the :code:`camunda` or :code:`spiff` and :code:`dmn` subpackages
of this library, and all you'll need to do is add them to the list of task converters, as we did above.

However, the default data converter is very simple, adding only JSON-serializable conversions of :code:`datetime`
and :code:`timedelta` objects (we make these available in our default script engine) and UUIDs. If your
workflow or task data contains objects that are not JSON-serializable, you'll need to extend ours, or extend
its base class to create one of your own.

To extend ours:

1. Subclass the base data converter
2. Register classes along with functions for converting them to and from dictionaries
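
A minimal sketch of those two steps; :code:`MyClass` is a hypothetical application object, and the base-class name and its :code:`register` signature are assumptions to check against your version:

.. code:: python

    class MyClass:
        """A hypothetical object that appears in our workflow data."""
        def __init__(self, name):
            self.name = name

    class MyDataConverter(BpmnDataConverter):  # assumed base class name

        def __init__(self):
            super().__init__()
            # Register the class with functions that convert it to and
            # from a JSON-serializable dictionary.
            self.register(
                MyClass,
                lambda obj: {'name': obj.name},
                lambda dct: MyClass(dct['name']),
            )

An instance of the subclass would then be passed as the :code:`data_converter` when creating the serializer, as in :code:`create_serializer` above.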
@ -421,3 +259,163 @@ new 1.1 format.

If you've overridden the serializer version, you may need to incorporate our serialization changes with
your own. You can find our conversions in
`version_migration.py <https://github.com/sartography/SpiffWorkflow/blob/main/SpiffWorkflow/bpmn/serializer/version_migration.py>`_.

Custom Script Engines
---------------------

You may need to modify the default script engine, whether because you need to make additional
functionality available to it, or because you might want to restrict its capabilities for
security reasons.

.. warning::

   The default script engine does little to no sanitization and uses :code:`eval`
   and :code:`exec`! If you have security concerns, you should definitely investigate
   replacing the default with your own implementation.

We'll cover a simple extension of a custom script engine here. There is also an example of
a similar engine based on `RestrictedPython <https://restrictedpython.readthedocs.io/en/latest/>`_
included alongside this example.

The default script engine imports the following objects:

- :code:`timedelta`
- :code:`datetime`
- :code:`dateparser`
- :code:`pytz`

You could add other functions or classes from the standard python modules or any code you've
implemented yourself. Your global environment can be passed in using the `default_globals`
argument when initializing the script engine. In our RestrictedPython example, we use their
`safe_globals`, which prevents users from executing some potentially unsafe operations.
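
For instance, a sketch of a restricted engine along these lines (assuming RestrictedPython is installed; :code:`default_globals` is the argument described above):

.. code:: python

    from RestrictedPython import safe_globals
    from SpiffWorkflow.bpmn.PythonScriptEngine import PythonScriptEngine

    # Script tasks evaluated by this engine see only RestrictedPython's
    # safe builtins rather than the full python environment.
    restricted_engine = PythonScriptEngine(default_globals=safe_globals)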

In our example models so far, we've been using DMN tables to obtain product information. DMN
tables have a **lot** of uses so we wanted to feature them prominently, but in a simple way.

If a customer was selecting a product, we would surely have information about how the product
could be customized in a database somewhere. We would not hard code product information in
our diagram (although it is much easier to modify the BPMN diagram than to change the code
itself!). Our shipping costs would not be static, but would depend on the size of the order and
where it was being shipped -- maybe we'd query an API provided by our shipper.

SpiffWorkflow is obviously **not** going to know how to make a call to **your** database or
make API calls to **your** vendors. However, you can implement the calls yourself and make them
available as a method that can be used within a script task.

We are not going to actually include a database or API and write code for connecting to and querying
it, but we can model our database with a simple dictionary lookup since we only have 7 products
and just return the same static info for shipping for the purposes of the tutorial.

.. code:: python

    from collections import namedtuple

    from SpiffWorkflow.bpmn.PythonScriptEngine import PythonScriptEngine

    ProductInfo = namedtuple('ProductInfo', ['color', 'size', 'style', 'price'])

    INVENTORY = {
        'product_a': ProductInfo(False, False, False, 15.00),
        'product_b': ProductInfo(False, False, False, 15.00),
        'product_c': ProductInfo(True, False, False, 25.00),
        'product_d': ProductInfo(True, True, False, 20.00),
        'product_e': ProductInfo(True, True, True, 25.00),
        'product_f': ProductInfo(True, True, True, 30.00),
        'product_g': ProductInfo(False, False, True, 25.00),
    }

    def lookup_product_info(product_name):
        return INVENTORY[product_name]

    def lookup_shipping_cost(shipping_method):
        return 25.00 if shipping_method == 'Overnight' else 5.00

    additions = {
        'lookup_product_info': lookup_product_info,
        'lookup_shipping_cost': lookup_shipping_cost
    }

    CustomScriptEngine = PythonScriptEngine(scripting_additions=additions)

We pass the script engine we created to the workflow when we load it.

.. code:: python

    return BpmnWorkflow(parser.get_spec(process), script_engine=CustomScriptEngine)

We can use the custom functions in script tasks like any normal function:

.. figure:: figures/custom_script_usage.png
   :scale: 30%
   :align: center

   Using a custom function in a script task

And we can simplify our 'Call Activity' flows:

.. figure:: figures/call_activity_script_flow.png
   :scale: 30%
   :align: center

   A simplified call activity flow

To run this workflow:

.. code-block:: console

    ./run.py -p order_product -b bpmn/call_activity_script.bpmn bpmn/top_level_script.bpmn

It is also possible to completely replace `exec` and `eval` with something else, or to
execute or evaluate statements in a completely separate environment, by subclassing the
:code:`PythonScriptEngine` and overriding `_execute` and `_evaluate`. We have examples of
executing code inside a docker container or in a celery task in this repo.
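
A bare-bones sketch of that approach; the exact signatures of :code:`_execute` and :code:`_evaluate` should be checked against your version, and :code:`send_to_worker` is a hypothetical stand-in for whatever remote execution mechanism you use:

.. code:: python

    class RemoteScriptEngine(PythonScriptEngine):

        def _evaluate(self, expression, context, **kwargs):
            # Gateway conditions and similar expressions stay local.
            return super()._evaluate(expression, context, **kwargs)

        def _execute(self, script, context, **kwargs):
            # Run script tasks elsewhere (e.g. a container or task queue)
            # and merge the resulting data back into the task context.
            context.update(send_to_worker(script, context))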

MultiInstance Notes
-------------------

**loopCardinality** - This variable can be a text representation of a
number - for example '2' - or it can be the name of a variable in
task.data that resolves to a text representation of a number.
It can also be a collection such as a list or a dictionary. In the
case that it is a list, the loop cardinality is equal to the length of
the list, and in the case of a dictionary, it is equal to the list of
the keys of the dictionary.

If loopCardinality is left blank and the Collection is defined, or if
loopCardinality and Collection are the same collection, then the
MultiInstance will loop over the collection and update each element of
that collection with the new information. In this case, it is assumed
that the incoming collection is a dictionary; currently, behavior for
working with a list in this manner is not defined and will raise an error.

**Collection** - This is the name of the collection that is created from
the data generated when the task is run. Examples of this would be
form data that is generated from a UserTask or data that is generated
from a script that is run. Currently the collection is built up to be
a dictionary with a numeric key that corresponds to the place in the
loopCardinality. For example, if we set the loopCardinality to be a
list such as ['a','b','c'], the resulting collection would be {1: 'result
from a', 2: 'result from b', 3: 'result from c'} - and this would be true
even if it is a parallel MultiInstance where it was filled out in a
different order.

**Element Variable** - This is the variable name for the current
iteration of the MultiInstance. In the case of the loopCardinality
being just a number, this would be 1, 2, 3, . . . If the
loopCardinality variable is mapped to a collection, it would be either
the list value from that position, or it would be the value from the
dictionary where the keys are in sorted order. It is the content of the
element variable that should be updated in the task.data. This content
will then be added to the collection each time the task is completed.

Example:
In a sequential MultiInstance, loop cardinality is ['a','b','c'] and elementVariable is 'myvar';
the first run of the task would have 'myvar':'a' in the task data, and the second would have 'myvar':'b'.

Example:
In a parallel MultiInstance, loop cardinality is a variable that contains
{'a':'A','b':'B','c':'C'} and elementVariable is 'myvar'. When the multiinstance is ready, there
will be 3 tasks. If we choose the second task, the task.data will
contain 'myvar':'B'.
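
To make that concrete, here is the same example written out as the dictionaries involved (illustrative values only, taken directly from the descriptions above):

.. code:: python

    # loopCardinality mapped to this collection yields three instances.
    cardinality = {'a': 'A', 'b': 'B', 'c': 'C'}

    # Task data seen by the second instance.
    task_data = {'myvar': 'B'}

    # Completed results are keyed by position in the collection.
    collection = {1: 'result from a', 2: 'result from b', 3: 'result from c'}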
@ -209,27 +209,20 @@ reach the event.

Message Events
^^^^^^^^^^^^^^

In BPMN, Messages are used to communicate across processes. Technically, Messages are not
intended to be used inside a single process, but Spiff does support this use.

Messages are similar to signals, in that they are referenced by name, but they have the
additional property that they may contain a payload.

We've added a QA process to our model, which will be initiated whenever an order takes too long
to fulfill. We'll send the reason for the delay in the message.

.. note::

   This example depends on some Camunda-specific features in our implementation; there is
   an alternate messaging implementation in the Spiff extensions package, described in
   :doc:`spiff-extensions`.

.. figure:: figures/throw_message_event.png
   :scale: 30%
@ -59,6 +59,7 @@ Supported BPMN Elements

   organization
   events
   multiinstance
   spiff-extensions

Putting it All Together
-----------------------
@ -0,0 +1,112 @@

Spiff Extensions
================

BPMN Model
----------

We'll be using the following files from `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.

- `bpmn-spiff/events <https://github.com/sartography/spiff-example-cli/blob/master/bpmn-spiff/events.bpmn>`_ workflow
- `bpmn-spiff/call activity <https://github.com/sartography/spiff-example-cli/blob/master/bpmn-spiff/call_activity.bpmn>`_ workflow
- `product_prices <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/product_prices.dmn>`_ DMN table
- `shipping_costs <https://github.com/sartography/spiff-example-cli/blob/master/bpmn/shipping_costs.dmn>`_ DMN table

We'll also be using the `run-spiff.py <https://github.com/sartography/spiff-example-cli/blob/master/run-spiff.py>`_ script
instead of the `run.py <https://github.com/sartography/spiff-example-cli/blob/master/run.py>`_ script.

Camunda's BPMN editor does not handle data objects in the expected way. You can create data object
references, but there is no way to re-use data objects.

It also does not support Message Correlations, and the interface for generating a message payload doesn't work
well in a Python environment.

We have extended BPMN.js to correct some of these issues. The examples in this section were created using our
custom BPMN editor, `bpmn-js-spiffworkflow <https://github.com/sartography/bpmn-js-spiffworkflow>`_.

Data Objects
^^^^^^^^^^^^

Data objects exist at a process level and are not visible in the diagram, but when you create a data object
reference, you can choose which data object it points to.

.. figure:: figures/data_object_configuration.png
   :scale: 50%
   :align: center

   Configuring a data object reference

When a data output association (a line) is drawn from a task to a data object reference, the value is copied
from the task data to the workflow data and removed from the task. If a data input association is created from
a data object reference, the value is temporarily copied into the task data while the task is being executed,
and immediately removed afterwards.

This allows sensitive data to be removed from individual tasks (in our example, the customer's credit card
number). It can also be used to prevent large objects from being repeatedly copied from task to task.

Multiple data object references can point to the same underlying data. In our example, we use two references
to the same data object to pass the credit card info to both tasks that require it. On the right panel, we can
see that only one data object exists in the process.

.. figure:: figures/data_objects.png
   :scale: 30%
   :align: center

   Data objects in a process

If you step through this workflow, you'll see that the card number is not contained in the task data after
the 'Enter Payment Info' task has been completed.

Configuring Messages
^^^^^^^^^^^^^^^^^^^^

Messages are handled slightly differently in Spiff Message Events. On a Message Throw Event or Send Task,
we define a payload, which is simply a bit of python code that will be evaluated against the task data and
sent along with the message. In the corresponding Message Catch Event or Receive Task, we define a
variable name where we'll store the result.

Spiff Messages can also optionally use correlation keys. The correlation key is an expression or set of
expressions that are evaluated against a message payload to create an additional identifier for associating
messages with processes.

In our example, it is possible that multiple QA processes could be started (the timer event will fire every
minute until the order fulfillment process is complete). In this case, the message name is insufficient, as
there will be multiple processes that can accept messages based on the name.

.. figure:: figures/correlation.png
   :scale: 50%
   :align: center

   Defining a correlation key

We use the timestamp of the message creation as a unique key that can be used to distinguish between multiple
QA processes.

.. figure:: figures/spiff_message_throw.png
   :scale: 50%
   :align: center

   Configuring a message throw event

When we receive the event, we assign the payload to :code:`order_info`.

.. figure:: figures/spiff_message_catch.png
   :scale: 50%
   :align: center

   Configuring a message catch event

The correlation is visible on both the Throw and Catch Events, but it is associated with the message rather
than the tasks themselves; if you update the expression on either event, the changes will appear in both places.

Running The Model
^^^^^^^^^^^^^^^^^

If you have set up our example repository, this model can be run with the
following command:

.. code-block:: console

    ./run-spiff.py -p order_product \
        -d bpmn/product_prices.dmn bpmn/shipping_costs.dmn \
        -b bpmn-spiff/events.bpmn bpmn-spiff/call_activity.bpmn
@ -4,83 +4,112 @@ Putting it All Together

In this section we'll be discussing the overall structure of the workflow
runner we developed in `spiff-example-cli <https://github.com/sartography/spiff-example-cli>`_.

Our example application contains two different workflow runners, one that uses tasks with
Camunda extensions
(`run.py <https://github.com/sartography/spiff-example-cli/blob/main/run.py>`_) and one
that uses tasks with Spiff extensions
(`run-spiff.py <https://github.com/sartography/spiff-example-cli/blob/main/run-spiff.py>`_).

Most of the workflow operations will not change, so shared functions are defined in
`utils.py <https://github.com/sartography/spiff-example-cli/blob/main/utils.py>`_.

The primary difference is handling user tasks. Spiff User Tasks define an extensions
property that stores a filename containing a JSON schema used to define a web form. We
use `react-jsonschema-form <https://react-jsonschema-form.readthedocs.io/en/latest/>`_
to define our forms. This doesn't necessarily make a lot of sense in terms of a command
line UI, so we'll focus on the Camunda workflow runner in this document.

Loading a Workflow
------------------

The :code:`CamundaParser` extends the base :code:`BpmnParser`, adding functionality for
parsing forms defined in Camunda User Tasks and decision tables defined in Camunda
Business Rule Tasks. (There is a similar :code:`SpiffBpmnParser` used by the alternate
runner.)

In general, any task parser can be replaced with a custom parser of your own design if
you have a BPMN modeller that produces XML not handled by the BPMN parsers in SpiffWorkflow.

We create the parser and use it to load our workflow.

.. code:: python

    parser = CamundaParser()
    wf = parse_workflow(parser, args.process, args.bpmn, args.dmn)

Our workflow parser looks like this:

.. code:: python

    from custom_script_engine import CustomScriptEngine

    def parse_workflow(parser, process, bpmn_files, dmn_files, load_all=True):
        parser.add_bpmn_files(bpmn_files)
        if dmn_files:
            parser.add_dmn_files(dmn_files)
        top_level = parser.get_spec(process)
        if load_all:
            subprocesses = parser.find_all_specs()
        else:
            subprocesses = parser.get_subprocess_specs(process)
        return BpmnWorkflow(top_level, subprocesses, script_engine=CustomScriptEngine)

We add the BPMN files to the parser we created earlier, and optionally add any DMN files,
if they were supplied.

We'll obtain the workflow specification from the parser for the top level process
using :code:`parser.get_spec()`.

We have two options for finding subprocess specs. The method :code:`parser.find_all_specs()`
will create specs for all executable processes found in every file supplied. The method
:code:`parser.get_subprocess_specs(process)` will create specs only for processes used by
the specified process. Both search recursively for subprocesses; the only difference is
that the latter method limits the search start to the specified process.

Our examples are pretty simple and we're not loading any extraneous stuff, so we'll
just always load everything. If your entire workflow is contained in your top-level
process, you can omit the :code:`subprocess` argument, but if your workflow contains
call activities, you'll need to use one of these methods to find the models for any
called processes.

We also provide an enhanced script engine to our workflow. More information about how and
why you might want to do this is covered in :doc:`advanced`. The :code:`script_engine`
argument is optional and the default will be used if none is supplied.

We return a :code:`BpmnWorkflow` that runs our top-level workflow and contains specs for any
subprocesses defined by that workflow.

Defining Task Handlers
----------------------

In :code:`run.py`, we define the function :code:`complete_user_task`. This has code specific
to Camunda User Task specs (in :code:`run-spiff.py`, we do something different).

We also import the shared function :code:`complete_manual_task` for handling Manual
Tasks, as there is no difference.

We create a mapping of task type to handler, which we'll pass to our workflow runner.

.. code:: python

    handlers = {
        ManualTask: complete_manual_task,
        UserTask: complete_user_task,
    }

This might not be a step you would need to do in an application you build, since
you would likely have only one set of task specs that need to be parsed, handled, and
serialized; however, our `run` method is an awful lot of code to maintain in two separate
files.

Running a Workflow
------------------

This is our application's :code:`run` method.

We pass our workflow, the task handlers, and a serializer (creating a serializer is covered in
more depth in :doc:`advanced`).

The :code:`step` argument is a boolean that indicates whether we want the option of seeing
a more detailed representation of the state at each step, which we'll discuss in the
section following this one. The :code:`display_types` argument controls what types of
tasks should be included in a detailed list when stepping through a process.

.. code:: python

    def run(workflow, task_handlers, serializer, step, display_types):

        workflow.do_engine_steps()
@ -105,19 +134,15 @@ section following this one.

Continuing inside the main loop of :code:`run`, we dispatch ready tasks to their handlers:

.. code:: python

                dump.write(state)
            elif selected != '':
                next_task = options[selected]
                handler = task_handlers.get(type(next_task.task_spec))
                if handler is not None:
                    handler(next_task)
                next_task.complete()

            workflow.refresh_waiting_tasks()
            workflow.do_engine_steps()
            if step:
                print_state(workflow, next_task, display_types)

        print('\nWorkflow Data')
        print(json.dumps(workflow.data, indent=2, separators=[ ', ', ': ' ]))
@ -186,16 +211,14 @@ Here is the code we use for examining the workflow state.

.. code:: python

    def print_state(workflow, task, display_types):

        print('\nLast Task')
        print(format_task(task))
        print(json.dumps(task.data, indent=2, separators=[ ', ', ': ' ]))

        all_tasks = [ task for task in workflow.get_tasks() if isinstance(task.task_spec, display_types) ]
        upcoming_tasks = [ task for task in all_tasks if task.state in [TaskState.READY, TaskState.WAITING] ]

        print('\nUpcoming Tasks')
        for idx, task in enumerate(upcoming_tasks):
@ -205,8 +228,7 @@ Here is the code we use for examining the workflow state.

.. code:: python

        for idx, task in enumerate(all_tasks):
            print(format_task(task))

We'll print information about our task as described above, as well as a dump of its data.

We can get a list of all tasks regardless of type or state with :code:`workflow.get_tasks()`.
@ -216,8 +238,3 @@ the tasks to only display the ones that would have salience to a user here.

We'll further filter those tasks for :code:`READY` and :code:`WAITING` tasks for a more
compact display, and only show all tasks when explicitly called for.