Mirror of https://github.com/sartography/spiff-arena.git (synced 2025-01-12 18:44:14 +00:00)
Proofing updates (#1838)
* update in place with python
* split files into chunks
* working chunking and updated quick start
* edits
* sanity check
* give up on faq page, long docs work
* debug
* system prompt updates, etc
* use temp file for output
* refactor
* remove dup import
* generate diff file
* check diff output to make sure it looks reasonable
* add overall results
* update script
* update script
* update script
* edits
* fix function

---------

Co-authored-by: burnettk <burnettk@users.noreply.github.com>
This commit is contained in:
parent 986a28f5e4
commit 5df1262dca
@@ -57,7 +57,7 @@ See full article [here](https://medium.com/@danfunk/understanding-bpmns-data-obj

 **1. Start Event**

-The first event in the minimal example is the start event.
+The first event in the Minimal Example is the start event.
 Each process diagram begins with a Start Event.

 Now explore the properties panel when you click on the first process of the diagram, “Start Event”.
@@ -65,11 +65,11 @@ Now explore the properties panel when you click on the first process of the diag

 General

-- The Name for a Start Event is often left blank unless it needs to be named to provide more clarity on the flow or to be able to view this name in Process Instance logs.
+- The name for a Start Event is often left blank unless it needs to be named to provide more clarity on the flow or to be able to view this name in Process Instance logs.

 - ID is automatically populated by the system (default behavior).
 However, it can be updated by the user, but it must remain unique within the process.
-Often the ID would be updated to allow easier referencing in messages and also Logs as long as it’s unique in the process.
+Often the ID would be updated to allow easier referencing in messages and also logs as long as it’s unique in the process.

 Documentation
@@ -79,7 +79,7 @@ Documentation

 ```{admonition} Note:
-In the minimal example, the Start Event is a None Start Event.
+In the Minimal Example, the Start Event is a None Start Event.
 This type of Start Event signifies that the process can be initiated without any triggering message or timer event.
 It is worth noting that there are other types of Start Events available, such as Message Start Events and Timer Start Events.
 These advanced Start Events will be discussed in detail in the subsequent sections, providing further insights into their specific use cases and functionalities.
@@ -137,7 +137,7 @@ Now explore the properties panel when you click on the last end event process:

 General

-- The Name for a Start Event is often left blank unless it needs to be named to provide more clarity on the flow or to be able to view this name in Process Instance logs.
+- The name for a Start Event is often left blank unless it needs to be named to provide more clarity on the flow or to be able to view this name in Process Instance logs.

 - ID is automatically populated by the system (default behavior).
 However, the user can update it, but it must remain unique within the process.
@@ -150,7 +150,7 @@ Documentation

 Instructions

-- These are the Instructions for the End User, which will be displayed when this task is executed.
+- These are the instructions for the end user, which will be displayed when this task is executed.
 You can click on launch editor to see the markdown file.

@@ -159,16 +159,16 @@ You can click on launch editor to see the markdown file.

 ## Essential Example

-Now that we have explored the minimal example, let's delve into a more comprehensive BPMN model known as the Essential Example.
+Now that we have explored the Minimal Example, let's delve into a more comprehensive BPMN model known as the Essential Example.
 This example serves as a stepping stone towards a deeper understanding of BPMN, as it incorporates a variety of fundamental concepts that work harmoniously together.

 ### Access the Process Directory

 Clicking on the process name will open the directory dedicated to the Essential Example process.
-Here are the four files in the Process:
+Here are the four files in the process:

-**BPMN editor** : The BPMN editor is a primary file that runs the engine.
-In the minimal example, we learned that it allows you to visually design and represent business processes using the Business Process Model and Notation (BPMN) standard.
+**BPMN editor**: The BPMN editor is a primary file that runs the engine.
+In the Minimal Example, we learned that it allows you to visually design and represent business processes using the Business Process Model and Notation (BPMN) standard.

 ![image](images/BPMN_Editor.png)

@@ -188,7 +188,8 @@ Here's what a DMN table looks like:

 ### Process Workflow

-In this BPMN diagram example, the process is outlined step by step: The process initiates with a start event, serving as the entry point for the workflow.
+In this BPMN diagram example, the process is outlined step by step.
+The process initiates with a start event, serving as the entry point for the workflow.

 Following the start event, a manual task named "Introduction" is incorporated, responsible for displaying a welcoming message to the user.

@@ -42,7 +42,7 @@ The process begins with a Start Event, signaling the start of the workflow.
 It is followed by a Manual Task called "Introduction" that displays a welcome message or instructions for the users.
 The content to be displayed is specified in the task's properties panel.

-![Image](images/Manu_instructions_panel.png)
+![Image](images/Manual_instructions_panel.png)

 2. **User Task with Form**

@@ -69,7 +69,7 @@ A Manual Task will display content based on the collected data and script-genera
 The instructions panel of the Manual Task provides the content to be displayed, which includes the form data entered by the user.
 It also offers an optional Chuck Norris joke based on user preference and a table of silly color names generated using Jinja templating.

-![Image](images/Manual_instructionss.png)
+![Image](images/Manual_instructions_side_by_side.png)

 5. **End Event**

@@ -290,7 +290,6 @@ Here's how to use it:
 ```

-
 ![Styling_Form](images/styling_forms.png)

 #### Key Points:

 - **Layout Design**: The `ui:layout` specifies that `firstName` and `lastName` should appear side by side. Each field's size adjusts according to the screen size (small, medium, large), utilizing grid columns for responsive design.
@@ -299,11 +298,11 @@ Here's how to use it:

 #### Example Illustrated:

-In this case, we are saying that we want the firstName and lastName in the same row, since they are both in the first element of the ui:layout array.
-We are saying that the firstName should take up 4 columns when a large display is used.
-The lastName also takes up 4 columns, so the two of them together fill up the whole row, which has 8 columns available for large displays.
-Medium displays have 5 columns available and small displays have 4.
-If you just specify a uiSchema like this, it will figure out the column widths for you:
+In this case, we are saying that we want the `firstName` and `lastName` in the same row, since they are both in the first element of the `ui:layout` array.
+We are saying that the `firstName` should take up 4 columns when a large display is used.
+The `lastName` also takes up 4 columns, so the two of them together fill up the whole row, which has 8 columns available for large displays.
+Medium displays have 5 columns available, and small displays have 4.
+If you just specify a `uiSchema` like this, it will figure out the column widths for you:

 {
 "ui:layout": [
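To make the layout rules described above concrete, here is a minimal sketch of an explicit `ui:layout` entry. The field names `firstName` and `lastName` and the 4-column widths come from the explanation above, while the abbreviated size keys (`sm`, `md`, `lg`) are assumptions standing in for the small, medium, and large display budgets of 4, 5, and 8 columns.

```json
{
  "ui:layout": [
    {
      "firstName": { "sm": 2, "md": 2, "lg": 4 },
      "lastName": { "sm": 2, "md": 3, "lg": 4 }
    }
  ]
}
```

Each object in the `ui:layout` array describes one row, so putting both fields in the first object is what places them side by side.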
@@ -372,7 +371,7 @@ To incorporate the markdown widget into your rjsf form, follow these steps:
 "ui:widget": "markdown"
 ```

-![rsjf markdown](images/rsjf_markdown.png)
+![rjsf markdown](images/rsjf_markdown.png)
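Put together, a uiSchema entry that applies the markdown widget might look like the sketch below; the property name `review_notes` is a hypothetical placeholder, and only the `"ui:widget": "markdown"` setting is taken from the steps above.

```json
{
  "review_notes": {
    "ui:widget": "markdown"
  }
}
```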

 ### Numeric Range Field

@@ -471,7 +470,7 @@ To give the user feedback about how they are doing in terms of staying within th

 #### JSON Schema Configuration

-To do this, your json schema must contain a string with a maxLength, like this:
+To do this, your JSON schema must contain a string with a `maxLength`, like this:

 ```json
 {
@@ -488,7 +487,7 @@ To do this, your json schema must contain a string with a maxLength, like this:

 #### UI Schema Configuration

-Your UI Schema will need a ui:options specifying counter true, like this:
+Your UI Schema will need a `ui:options` specifying `counter: true`, like this:

 ```json
 {
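Assembled in one place, a field that enables the character counter described in the two sections above could be sketched as follows; the field name `comment` and the 200-character limit are placeholders, while `maxLength` in the JSON schema and `counter: true` under `ui:options` in the UI schema are the settings the text calls for.

```json
{
  "type": "object",
  "properties": {
    "comment": {
      "type": "string",
      "title": "Comment",
      "maxLength": 200
    }
  }
}
```

```json
{
  "comment": {
    "ui:options": {
      "counter": true
    }
  }
}
```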
@@ -15,7 +15,7 @@ It is a specialized event used to initiate error handling workflows dynamically.

 **Reason to Use**:
 - **Modular Error Handling**: Separates error handling logic into dedicated subprocesses, improving process organization and maintainability.
-- **Reusability**: Allows for the reuse of error handling subprocesses across multiple parent processes.
+- **Reusability**: Allows for the reuse of error-handling subprocesses across multiple parent processes.
 - **Focused Recovery Strategies**: Enables the development of targeted recovery strategies for specific errors, enhancing error resolution effectiveness.

 **Example**:
@@ -88,7 +88,8 @@ The process kicks off with an action that requires fetching specific employee de

 - **Configuration**: This task is configured to make an HTTP GET request to the BambooHR API to retrieve employee information.
 - **Operator ID**: `http/getrequestV2`, indicating the operation type and version.
-- **Parameters**: The URL is set to `https://api.bamboohr.com/api/gateway.php/statusresearchdemo/v1/employees/113`, with headers accepting JSON, and parameters requesting the first and last names of the employee. Authentication is provided via basic auth, with a specified API key (`BAMBOOHR_API_KEY`) and password.
+- **Parameters**: The URL is set to `https://api.bamboohr.com/api/gateway.php/statusresearchdemo/v1/employees/113`, with headers accepting JSON, and parameters requesting the first and last names of the employee.
+Authentication is provided via basic auth, with a specified API key (`BAMBOOHR_API_KEY`) and password.
 - **Attempts**: Configured to retry the operation twice in case of failure.

 3. **Error Handling Setup**:
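Collected into one place, the configuration described above amounts to roughly the following sketch. Every key name below is an illustrative placeholder rather than the exact parameter schema of the `http/getrequestV2` connector, which is not shown in this diff.

```json
{
  "_note": "illustrative placeholders, not the connector's actual parameter names",
  "url": "https://api.bamboohr.com/api/gateway.php/statusresearchdemo/v1/employees/113",
  "headers": { "Accept": "application/json" },
  "params": { "fields": "firstName,lastName" },
  "basic_auth_username": "BAMBOOHR_API_KEY",
  "basic_auth_password": "********",
  "attempts": 2
}
```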
Before Width: | Height: | Size: 88 KiB After Width: | Height: | Size: 88 KiB |
Before Width: | Height: | Size: 53 KiB After Width: | Height: | Size: 53 KiB |
@@ -4,7 +4,7 @@ A Message Event acts as a channel for the exchange of information between differ
 While it might be tempting to associate "message events" with emails, their scope extends beyond digital correspondence.
 They signify the transmission of information between various process components, whether within the same process or across different processes.

-![message_relationship](images/relationship_message.png)
+![message_relationship](images/relationship_message.png)

 **Reasons to Use a Message Event:**

@@ -101,7 +101,7 @@ Three separate configurations need to be completed.
 | 💻 Form | ⌨ Field Input | 📝 Description |
 | ------------------------------------------------------------- | --- | --- |
 | ![name_field](images/name_field.png) | **Name:** Place Order | A descriptive name given to the element, providing a human-readable label or title. |
-| ![id_field](images/id_field.png) | **ID:** Example - send_invoice | An identifier used to uniquely identify the element within the BPMN model.|
+| ![id_field](images/id_field.png) | **ID:** Example - send_invoice | An identifier used to uniquely identify the element within the BPMN model. |
 | ![documentation_field](images/documentation_field.png) | **Element Documentation:** URL, Raw Data, Plain Text | Additional information or documentation related to the element, such as URLs, plain text, or raw data. |

 **Collaboration:**
@@ -112,15 +112,15 @@ The identical configuration must be applied to every BPMN diagram if messages ex

 | 💻 Form | ⌨ Field Input | 📝 Description |
 | --- | --- | --- |
-| ![correlation_keys](images/correlation_keys_1.png) | **Correlation Keys:** order | A correlation key is a unique identifier or attribute used to establish a connection or relationship between message events (it can be likened to the shared subject between them). It is possible to have multiple correlation keys for a process.
-| ![correlation_properties](images/correlation_properties_1.png) | **Correlation Properties:** invoice_number | The correlation property is what differentiates each key instance from another, and it's the defining attribute that sets them apart. For instance, if "order" is selected as the correlation key, a property like "invoice_number" could be used to distinguish each order instance from another. Keep in mind that this variable should be incorporated within the context of the process instance.|
+| ![correlation_keys](images/correlation_keys_1.png) | **Correlation Keys:** order | A correlation key is a unique identifier or attribute used to establish a connection or relationship between message events (it can be likened to the shared subject between them). It is possible to have multiple correlation keys for a process. |
+| ![correlation_properties](images/correlation_properties_1.png) | **Correlation Properties:** invoice_number | The correlation property is what differentiates each key instance from another, and it's the defining attribute that sets them apart. For instance, if "order" is selected as the correlation key, a property like "invoice_number" could be used to distinguish each order instance from another. Keep in mind that this variable should be incorporated within the context of the process instance. |
 | ![collaboration_messages](images/collaboration_messages_1.png) | **Messages:** order_approval, order_dispatch, etc. | Messages are established for each message pair (Throw and Catch Message Events). This setup will be utilized to configure the events, linking the associated occurrences together. |

 **Throw Message Event:**

 | 💻 Form | ⌨ Field Input | 📝 Description |
 | --- | --- | --- |
-| ![conditions](images/message_1.png) | **Message:** order_approval | This input isn't an open-text field; instead, it provides a dropdown list populated by the Messages configured in the preceding Collaboration section.|
+| ![conditions](images/message_1.png) | **Message:** order_approval | This input isn't an open-text field; instead, it provides a dropdown list populated by the Messages configured in the preceding Collaboration section. |
 | ![conditions](images/payload_msg.png) | **Payload:** order_amount | The Payload can include a variable, holding unique information specific to the instance, or in this scenario, the order. |
 | ![conditions](images/event_correlation_msg.png) | **Correlation:** invoice_number | Select the correlation that can identify the distinct property distinguishing one process instance from another. |

@@ -130,6 +130,6 @@ The connected catch event is configured in precisely the same manner as the thro

 | 💻 Form | ⌨ Field Input | 📝 Description |
 | --- | --- | --- |
-| ![conditions](images/message_1.png) | **Message:** order_approval | This input isn't an open-text field; instead, it consists of a dropdown list populated by the Messages configured in the preceding Collaboration section.|
+| ![conditions](images/message_1.png) | **Message:** order_approval | This input isn't an open-text field; instead, it consists of a dropdown list populated by the Messages configured in the preceding Collaboration section. |
 | ![conditions](images/payload_msg.png) | **Variable Name:** order_amount | The Variable Name can include a variable, holding unique information specific to the instance, or in this scenario, the order. |
 | ![conditions](images/event_correlation_msg.png) | **Correlation:** invoice_number | Select the correlation that can identify the distinct property distinguishing one process instance from another. |

@@ -16,7 +16,7 @@ When one instance is completed, a new instance is created for the next element i

 All instances of the task are launched simultaneously, allowing for concurrent processing of the collection elements.
 In the case of a parallel multi-instance activity, all instances are created when the multi-instance body is activated.
-The instances are executed concurrently and independently from each other.
+The instances are executed concurrently and independently of each other.

 ![Multi_instance_parallel](images/multiinstance_parallel_example.png)

@@ -4,7 +4,7 @@

 This example illustrates the use of parallel gateways in BPMN models to manage and synchronize concurrent tasks effectively.

-Parallel gateways are powerful BPMN elements that split a process flow into multiple independent flows, allowing for simultaneous execution of tasks, and later merging these flows to ensure coordination before moving forward.
+Parallel gateways are powerful BPMN elements that split a process flow into multiple independent flows, allowing for the simultaneous execution of tasks, and later merging these flows to ensure coordination before moving forward.

 ### Process Steps

@@ -13,7 +13,7 @@ Parallel gateways are powerful BPMN elements that split a process flow into mult

 2. **Sequence Flow 1: Script Task [y = 1]**: Assigns the value `1` to variable `y`. This script task initializes variable `y` and demonstrates setting a simple variable in one branch of the parallel flow.

-3. **Sequence Flow 2: Script Task [z = 2]**: Similar to the first script task, this operation assigns the value `2` to variable `z`. Functions concurrently with the first script task, demonstrating the capability of the BPMN model to handle multiple operations at the same time.
+3. **Sequence Flow 2: Script Task [z = 2]**: Similar to the first script task, this operation assigns the value `2` to variable `z`. It functions concurrently with the first script task, demonstrating the capability of the BPMN model to handle multiple operations at the same time.

 4. **Parallel Gateway (Merge)**: This merging gateway reunites the divergent paths after independent task completion, ensuring all parallel processes are complete before moving forward. It acts as a critical checkpoint in the process to synchronize and consolidate data from multiple sources, ensuring that no task moves forward until all parallel tasks have concluded.

@@ -15,13 +15,14 @@ A process can have one or more Pools, each with one or more Lanes.

 ## Pools

-A Pool can be configured as an "Empty Pool" (collapsed) or an "Expanded Pool".
+A Pool can be configured as an "Empty Pool" (collapsed) or an "Expanded Pool."
 You can choose the desired configuration 🔧 from the element's options after dragging it onto your diagram.

 ![pools_and_lanes](images/pools_and_lanes_1.png)

 Empty Pools are used to represent role players in cases where a specific process is neither known nor required, but the interaction points remain valuable.
-They serve to illustrate the engagement of certain entities without detailing their internal processes, for example, we don't know a customer's specific process but it matters when we interact with them to complete our process.
+They serve to illustrate the engagement of certain entities without detailing their internal processes.
+For example, we don't know a customer's specific process, but it matters when we interact with them to complete our process.

 Conversely, Expanded Pools are employed when the processes are known and hold relevance within the diagram's context.

@@ -42,7 +43,7 @@ However, if a process doesn't logically fit within the same Pool, like those for

 **Collapsed (Empty) Pool configuration:**

-Configuring an "Empty Pool" (collapsed) to represent an external entity such as a customer.
+Configuring an "Empty Pool" (collapsed) to represent an external entity, such as a customer.

 | 💻 Form | ⌨ Field Input | 📝 Description |
 | --- | --- | --- |
@@ -119,12 +119,13 @@ Just remember to have a mechanism in place to eventually break out of the loop a

 ## Timer Event Configuration

-| 💻 Form | ⌨ Field Input | 📝 Description |
-| --- | --- | --- |
-| ![name_field](images/name_field.png) | **Name:** Cancel Order | A descriptive name given to the element, providing a human-readable label or title. |
-| ![id_field](images/id_field.png) | **ID:** Example - cancel_order | An identifier used to uniquely identify the element within the BPMN model. |
+| 💻 Form | ⌨ Field Input | 📝 Description |
+| -------------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------------- |
+| ![name_field](images/name_field.png) | **Name:** Cancel Order | A descriptive name given to the element, providing a human-readable label or title. |
+| ![id_field](images/id_field.png) | **ID:** Example - cancel_order | An identifier used to uniquely identify the element within the BPMN model. |
+| ![timer_field](images/timer_field.png) | **Type:** Duration **Value:** PT48H | Choose the type of trigger you want to set: Specific Date/Time, Duration, or Cycle Trigger. |

 ```{admonition} Timer Delay
 💡 Note: Timer events, especially those set for short durations, may face delays of 20-30 seconds, varying with the number of active instances.
 Despite significant improvements, our ongoing efforts aim to further reduce these delays.
 ```
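For reference, the `PT48H` value shown above is an ISO 8601 duration meaning 48 hours; other durations follow the same pattern, such as `PT30M` for 30 minutes or `P2D` for two days. Date/Time and Cycle triggers are typically expressed in ISO 8601 form as well, a timestamp for the former and a repeating interval such as `R3/PT1H` for the latter.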
@@ -8,7 +8,8 @@ This documentation outlines the process of creating and managing sensitive data
 ### Process Breakdown

 #### 1. Identifying Sensitive Data
-- Determine what constitutes sensitive data within your workflow. This could include personal information, financial details, or confidential business information.
+- Determine what constitutes sensitive data within your workflow.
+This could include personal information, financial details, or confidential business information.

 #### 2. Data Object Creation and Script Task Integration

@@ -24,7 +24,7 @@ Secrets are only used in service tasks.

 ### Adding a New Secret

-1. **Navigate to the Configuration Section**: Go to the configuration section from the top panel and click on "Add a secret". Ensure you have admin access to SpiffArena.
+1. **Navigate to the Configuration Section**: Go to the configuration section from the top panel and click on "Add a secret." Ensure you have admin access to SpiffArena.
 ![Configuration Section](images/Secrets_step_1.png)

 2. **Add New Secret**: Create a new secret by entering a key and its corresponding value. Once saved, the value will be encrypted.

@@ -99,8 +99,13 @@ The new Process Groups tile will be available under the Process Groups view.
 - Identifier: Enter a unique identifier for the process model.
 - Description: Provide a brief description of the process model, outlining its purpose or functionality.
 - Notification Type: Specify the type of notification related to the process model.
-- Notification Addresses: Enter the addresses or destinations where notifications should be sent in the event that a process instance encounters an error. You do not need to worry about setting these values unless you are interested in custom {ref}`process_error_handling`.
-- Metadata Extraction Path: You can provide one or more metadata extractions to uplift data from your process instances to provide quick access in searches and perspectives. Specify the key and path/location where metadata extraction should occur within the process model. For example, if you have a script task that runs the statement `great_color = "blue"`, then you would set extraction path to `great_color`. You would probably also set the extraction key to `great_color`. But if you wanted to, you could just call it `color`, assuming you wanted that to be the name used in reports, etc.
+- Notification Addresses: Enter the addresses or destinations where notifications should be sent in the event that a process instance encounters an error.
+You do not need to worry about setting these values unless you are interested in custom {ref}`process_error_handling`.
+- Metadata Extraction Path: You can provide one or more metadata extractions to uplift data from your process instances to provide quick access in searches and perspectives.
+Specify the key and path/location where metadata extraction should occur within the process model.
+For example, if you have a script task that runs the statement `great_color = "blue"`, then you would set the extraction path to `great_color`.
+You would probably also set the extraction key to `great_color`.
+But if you wanted to, you could just call it `color`, assuming you wanted that to be the name used in reports, etc.

 Make sure to accurately fill in all the required fields in the Process Model form to ensure proper configuration and functionality.

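Conceptually, each metadata extraction described above pairs a key with a path; for the `great_color` example, the pairing amounts to the sketch below. The values are entered through the form fields on the process model page, so this JSON is only an illustration of the key/path relationship, not a file you need to create.

```json
[
  {
    "key": "great_color",
    "path": "great_color"
  }
]
```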
@@ -137,7 +142,8 @@ Look for common patterns or similarities in their job functions and tasks relate
 Add a user email under the users 'column' and the group name under 'groups' and don't forget to add double quotes.

 ```{admonition} Note
-Based on DMN functionality, leaving the "*" column empty means that all rules ('When') will be triggered without specifying a condition. Read more about DMN tables to understand how the rules engine can be utilized for many different scenarios.
+Based on DMN functionality, leaving the "*" column empty means that all rules ('When') will be triggered without specifying a condition.
+Read more about DMN tables to understand how the rules engine can be utilized for many different scenarios.
 ```

 ![user_to_groups](images/user_to_groups.png)
@@ -146,15 +152,23 @@ Based on DMN functionality, leaving the "*" column empty means that all rules ('

 Now that the groups have been identified, their permissions can be set by adding the group name under the "permissions_group" column.

-- To determine a user's capabilities within the permissible scope, you can define specific permissions. These permissions can be combined in a sequence if multiple apply to a particular rule. For instance, ["read", "start"] indicates that the user can perform both reading and starting actions. Alternatively, [All] can be employed to grant unrestricted access.
-- The hit policy is set to "Collect" which means that all conditions that are true will be applied. [Read more about DMN tables and hit policies here.](../Building_Diagrams/dmn.md)
-- The permission URL can be configured to define the user's access privileges. Our objective is to streamline the process by minimizing the necessity of being familiar with the complete set of permission URLs. In most instances, utilizing BASIC and ELEVATED permissions, as well as PM/PG, should be sufficient. However, it is also feasible to directly incorporate any API URL into the permissions.
+- To determine a user's capabilities within the permissible scope, you can define specific permissions.
+These permissions can be combined in a sequence if multiple apply to a particular rule.
+For instance, ["read", "start"] indicates that the user can perform both reading and starting actions.
+Alternatively, [All] can be employed to grant unrestricted access.
+- The hit policy is set to "Collect" which means that all conditions that are true will be applied.
+[Read more about DMN tables and hit policies here.](../Building_Diagrams/dmn.md)
+- The permission URL can be configured to define the user's access privileges.
+Our objective is to streamline the process by minimizing the necessity of being familiar with the complete set of permission URLs.
+In most instances, utilizing BASIC and ELEVATED permissions, as well as PM/PG, should be sufficient.
+However, it is also feasible to directly incorporate any API URL into the permissions.

 In truth, what you are doing is writing an expression.
 In this case, it would read that if the variable 'permissions_group' type string is equal to 'permissions' variable of type string then set the 'permission_url' equal to the associated value.

 ```{admonition} Note
-If you find coding more familiar and preferable to constructing DMN tables, you may notice similarities between this DMN table and the shared permission configuration file. This similarity can help clarify or make it easier for you to understand the DMN table structure and its relation to the permission configuration.
+If you find coding more familiar and preferable to constructing DMN tables, you may notice similarities between this DMN table and the shared permission configuration file.
+This similarity can help clarify or make it easier for you to understand the DMN table structure and its relation to the permission configuration.
 ```

 ![group_permission](images/group_permission.png)

@@ -3,7 +3,7 @@

 ## Setting the Environment Variable

 Once a `Connector Proxy` has been deployed, to integrate it with SpiffArena, we simply need to update an environment variable and restart the backend.
-If you're using the [Getting Started Guide](https://www.spiffworkflow.org/posts/articles/get_started/), open the docker-compose.yml file, otherwise edit the environment variable in the way that is appropriate for your deployment.
+If you're using the [Getting Started Guide](https://www.spiffworkflow.org/posts/articles/get_started/), open the docker-compose.yml file; otherwise, edit the environment variable in the way that is appropriate for your deployment.
 The variable we need to change is called `SPIFFWORKFLOW_BACKEND_CONNECTOR_PROXY_URL`.

 Here's an example diff using the function URL from the AWS tutorial:

@@ -3,9 +3,9 @@

 The minimal deployment is to mimic the docker-compose.yml file at the root of spiff-arena.
 Steps for a more hardened production setup after that baseline include:

-1. setting up a MySQL or PostgreSQL database for Backend persistence (instead of sqlite)
-2. setting up a Redis/Valkey or RabbitMQ server for a Celery broker.
-3. separating out the Backend deployment into three deployments, 1) API, 2) Background, and 3) Celery worker.
+1. Setting up a MySQL or PostgreSQL database for Backend persistence (instead of SQLite)
+2. Setting up a Redis/Valkey or RabbitMQ server for a Celery broker.
+3. Separating out the Backend deployment into three deployments: 1) API, 2) Background, and 3) Celery worker.

 ```mermaid
 graph TD;
@@ -1,23 +1,23 @@
 # Path-based Routing

-If you are using frontend, frontend and backend need to share cookies.
-Backend, in particular, sets a cookie, and frontend needs to read it.
-As such, you cannot run frontend and backend on different subdomains, like this:
+If you are using the frontend, the frontend and backend need to share cookies.
+The backend, in particular, sets a cookie, and the frontend needs to read it.
+As such, you cannot run the frontend and backend on different subdomains, like this:

 - frontend.example.com
 - backend.example.com

 Instead, we often run them like this:

-- example.com for frontend
-- api.example.com for backend
+- example.com for the frontend
+- api.example.com for the backend

 This works since the backend can set a cookie for the entire domain, and the frontend can read it.

 Another alternative that works well is to run them on the same host but with different paths, like this:

-- spiff.example.com for frontend
-- spiff.example.com/api for backend
+- spiff.example.com for the frontend
+- spiff.example.com/api for the backend

 To accomplish this path-based routing scenario, set environment variables like this in the frontend:

@@ -29,7 +29,7 @@ Editing process models locally is another perfectly good option, depending on yo
 SPIFFWORKFLOW_BACKEND_GIT_COMMIT_ON_SAVE=true
 SPIFFWORKFLOW_BACKEND_GIT_USERNAME=automation-user
 SPIFFWORKFLOW_BACKEND_GIT_USER_EMAIL=automation-user@example.com
-SPIFFWORKFLOW_BACKEND_GIT_SOURCE_BRANCH=sandbox # this branch will get pushes with your commits
+SPIFFWORKFLOW_BACKEND_GIT_SOURCE_BRANCH=sandbox # this branch will get pushed with your commits
 SPIFFWORKFLOW_BACKEND_GIT_SSH_PRIVATE_KEY_PATH=/path/to/your/private/key
 SPIFFWORKFLOW_BACKEND_GIT_SSH_PRIVATE_KEY="alternate way of providing the key as a multiline string"
 ```
@@ -58,7 +58,7 @@ This can be particularly helpful if you have long-lived process instances.

 ## Process Model Promotion Strategy

-Probably the best way of promoting models is to do work on a specific branch, and then merge that branch into the next branch (maybe staging if you are actively changing dev).
+Probably the best way of promoting models is to do work on a specific branch and then merge that branch into the next branch (maybe staging if you are actively changing dev).
 It is also possible to promote models piecemeal.
 This is activated via `SPIFFWORKFLOW_BACKEND_GIT_PUBLISH_TARGET_BRANCH`, which is for a specific use case where you want to publish specific process models from the source branch to a target Git branch, rather than promoting the entire Git branch.
 A publish function appears in the UI when `SPIFFWORKFLOW_BACKEND_GIT_PUBLISH_TARGET_BRANCH` is set.
@@ -34,7 +34,7 @@ If you want to get the results of this job after the worker processes it, you wo
 redis-cli get celery-task-meta-9622ff55-9f23-4a94-b4a0-4e0a615a8d14
 ```

-As such, if you wanted to get ALL of the results, you could use a hilarious command like:
+As such, if you wanted to get ALL of the results, you could use a command like:

 ```sh
 echo 'keys celery-task-meta-\*' | redis-cli | sed 's/^/get /' | redis-cli
@@ -6,7 +6,6 @@
 👇 Throughout this step-by-step guide, we will walk you through key components of SpiffWorkflow, ensuring that you have a clear understanding of how to use the platform effectively.
 ```

-
 ## 🚀 Getting Started with SpiffArena

 Sartography, the company that shepherds the SpiffWorkflow and SpiffArena projects, provides users with a platform to explore workflow concepts through a collection of examples, diagrams, and workflows.
@@ -20,12 +19,12 @@ Users can interact with pre-built models, make modifications, and visualize proc
 :width: 230px
 :align: right
 ```

 To begin your journey with SpiffArena, open your web browser and navigate to the SpiffArena website (currently spiffdemo.org).

 On the login screen, you will find the option to log in using Single Sign-On.
 Click the Single Sign-On button and select your preferred login method, such as using your Gmail account.

 ```{admonition} Note:
 Stay tuned as we expand our sign-on options beyond Gmail.
 More ways to access SpiffArena are coming your way!
@@ -33,28 +32,28 @@ More ways to access SpiffArena are coming your way!

 ## How to Navigate through SpiffArena

-In this section, we will navigate through the platform and provide a generic overview of each section step-by-step, allowing you to understand and engage with the platform more effectively.
+In this section, we will navigate through the platform and provide a general overview of each section step-by-step, allowing you to understand and engage with the platform more effectively.

 ### Step 1: Explore the Home section

 Once you are signed in, you can start exploring the home page.
-The home page has three tab sections: **In Progress**, **Completed** and **Start New**.
+The home page has three tab sections: **In Progress**, **Completed**, and **Start New**.

 ![Untitled](images/Untitled_2.png)

 - The "In Progress" section provides an overview of all ongoing process instances, including those initiated by you, those awaiting your action, or those awaiting action from a team you are a member of (Optional).
-- The "Completed" section allows you to view all completed process instances, including those initiated by you, those initiated by other SpiffWorkflow users with tasks completed by you and if applicable, those with tasks completed by a group of which you are a member.
+- The "Completed" section allows you to view all completed process instances, including those initiated by you, those initiated by other SpiffWorkflow users with tasks completed by you, and, if applicable, those with tasks completed by a group of which you are a member.
 - The “Start New” section displays the processes you are permitted to start according to your role.

 ```{admonition} Key terms
 :class: info
 💡 **Process:** A process is a sequence of tasks that must be completed to achieve a specific goal.

-**Instance:** An instance, on the other hand, represents a specific occurrence of a process. Each instance has its own set of data and state that is updated as the instance progresses through the workflow.
+**Instance:** An instance, on the other hand, represents a specific occurrence of a process.
+Each instance has its own set of data and state that is updated as the instance progresses through the workflow.
 ```

-If you are a member of a team, you may also have one or more Instances with tasks waiting for [team name] lists as well.
+If you are a member of a team, you may also have one or more instances with tasks waiting for [team name] lists as well.

 ![Untitled](images/Untitled_3.png)

@@ -63,7 +62,7 @@ If you are a member of a team, you may also have one or more Instances with task
 The process section provides a comprehensive view of the process ecosystem by showcasing process groups and process models.

 ```{admonition} Process Groups
-A **process group** is a way of grouping a bunch of **process models** and a **process model** contains all the files necessary to execute a specific process.
+A **process group** is a way of grouping a bunch of **process models**, and a **process model** contains all the files necessary to execute a specific process.
 ```

 ![Untitled](images/Untitled_4.png)
@@ -86,7 +85,7 @@ Feel free to ask questions about the platform's features or how to get started.
 With SpiffWorkflow, you can easily initiate a new process instance.
 Here's a step-by-step guide on how to start a process.

-### Step 1: Sign in and navigate to Home section
+### Step 1: Sign in and navigate to the Home section

 The first thing you need to do is sign in to your account on SpiffWorkflow.
 Once you're signed in, you'll see three tabs in the Home section: In progress, Completed, and Start New.
@@ -109,7 +108,7 @@ Choose the process you want to initiate and click “Start”.

 You have successfully started a new process instance in SpiffWorkflow.

-If a process model doesn't have an associated BPMN file, the system will not display a start button.
+If a process model doesn't have an associated BPMN file, the system will not display a start button.
 This is to prevent confusion and errors that might arise from attempting to start an incomplete process model.

 ---
@@ -129,7 +128,7 @@ There will be three types of instances shown:

 - **Started by me:** This section shows a list of process instances that were started by you, providing you with an overview of the instances you have initiated.
 - **Waiting for me:** This section displays a list of process instances with tasks assigned to you and are currently waiting for you to respond to.
-- **Waiting for [team name]:** If you are a member of SpiffWorkflow**,** this section displays a list of process instances with tasks assigned to a group you are a member of and currently waiting for someone in that group to complete them.
+- **Waiting for [team name]:** If you are a member of SpiffWorkflow, this section displays a list of process instances with tasks assigned to a group you are a member of and currently waiting for someone in that group to complete them.

 ![Untitled](images/Untitled_8.png)

@@ -142,7 +141,7 @@ In the case of new users who haven't started or been part of any process or been
 Once you have identified the request you need to respond to, simply click on the 'Go' button in the action column to open it.
 Upon opening the process instance, you can respond to the request based on the requirements of that task.

-Depending on the task requirements, this may involve submitting additional information, reviewing the task or any other action item.
+Depending on the task requirements, this may involve submitting additional information, reviewing the task, or any other action item.

 ![Untitled](images/Untitled_10.png)

@@ -161,7 +160,7 @@ Here's how you can view the steps of the process you just started.

 ### Step 1: Navigate to the “Home” or “Process Instance” section

-There are 2 ways of finding your process instances.
+There are two ways of finding your process instances.

 Option 1: Once you're signed in, navigate to the home section.
 Here you will find a list of all the process instances you've initiated.
@@ -184,13 +183,11 @@ The grey represents the path which was taken by the current process steps.

 By following these steps, you can easily view the steps of the process you initiated and keep track of progress.

-
-
 ---

 ## How to view the Process-defined metadata for a process instance

-The Process-defined **metadata can provide valuable insights into its history, current status, and other important details that is specifically created and used within a particular process.
+The Process-defined **metadata can provide valuable insights into its history, current status, and other important details that are specifically created and used within a particular process**.
 With the SpiffWorkflow platform, users can easily view the metadata for a process instance.

 To check the metadata of a process instance, follow these steps.
@@ -268,8 +265,9 @@ Within the "Process Instances" section, you'll see a list of all the instances f

 ![Untitled](images/Untitled_19.png)

-If you are on a homepage, you can navigate to the table you wish to filter.
-Look for the black funnel icon in the top right-hand corner above the table and click on the Icon: By clicking on the filter icon, you'll be taken to a full-screen process view.
+If you are on a home page, you can navigate to the table you wish to filter.
+Look for the black funnel icon in the top right-hand corner above the table and click on the icon.
+By clicking on the filter icon, you'll be taken to a full-screen process view.

 ![Filter Icon](images/Filter_icon.png)

@@ -288,18 +286,16 @@ Once you have entered all the relevant filter details, click on the "**Apply**"
 The system will then display all the process instances matching the input details.

 ![Untitled](images/Untitled_21.png)
-
 To filter process instances by **process-defined metadata**, follow these steps:

 - Search for the specific **process** you want to filter and click on the column option to select metadata options.
-
 ![Untitled](images/Untitled_22.png)

-- The metadata fields will be displayed in the dropdown. Select the field you want to display and click on "**Save**" to apply the changes.
+- The metadata fields will be displayed in the dropdown.
+Select the field you want to display and click on "**Save**" to apply the changes.

 ![Untitled](images/Untitled_23.png)

-- After saving the details, the newly created column will be displayed. Finally click on “**Apply“** button to reflect the changes.
+- After saving the details, the newly created column will be displayed.
+Finally, click on the “**Apply**” button to reflect the changes.

 ![Untitled](images/Untitled_24.png)

@@ -310,7 +306,7 @@ If you wish to save the perspectives, click on the "**Save**" button.
 ![Untitled](images/Untitled_25.png)

 A prompt will appear, allowing you to provide a name for the identifier associated with the saved filter.
-Enter a descriptive name for the filter identifier and “**Save”** changes.
+Enter a descriptive name for the filter identifier and “**Save**” changes.
 Now you can search for specific processes using Process Instance Perspectives.

 ![Untitled](images/Untitled_26.png)
@@ -332,9 +328,9 @@ This can help you manage your workflows more efficiently and keep track of the p

 ## How to Interpret Colors in a BPMN Diagram

-One of the key features of BPMN diagrams in SpiffWorkflow is the use of colors to represent different states or statuses of process instances.
+One of the key features of BPMN diagrams in SpiffWorkflow is the use of colors to represent different states or statuses of process instances.

-Here are the colors used in BPMN Process:
+Here are the colors used in BPMN Process:

 1. **Grey Color:**
 - **Meaning:** The task is completed.
@@ -344,36 +340,38 @@ This can help you manage your workflows more efficiently and keep track of the p

 2. **Yellow Color:**
 - **Meaning:** The process instance has started and is currently in progress.
-- **Implication:** This color signifies that the task is active and ongoing. It may require monitoring or further inputs to proceed.
-
-![Colors](images/Yellow.png)
+- **Implication:** This color signifies that the task is active and ongoing.
+It may require monitoring or further inputs to proceed.
+
+![Colors](images/Yellow.png)

 3. **Red/Pink Color:**
 - **Meaning:** Indicates errors in the task.
-- **Implication:** There might be issues or obstacles preventing the task from proceeding as expected. Immediate attention and troubleshooting may be required.
+- **Implication:** There might be issues or obstacles preventing the task from proceeding as expected.
+Immediate attention and troubleshooting may be required.

-![Colors](images/Red.png)
+![Colors](images/Red.png)

 4. **Purple Color:**
-- **Meaning:** The activity has been cancelled.
-- **Implication:** This task was intentionally stopped before completion. This could be due to time constraints, external triggers, or other predefined conditions that have been set as boundary events.
-
-![Colors](images/Purple.png)
+- **Meaning:** The activity has been canceled.
+- **Implication:** This task was intentionally stopped before completion.
+This could be due to time constraints, external triggers, or other predefined conditions that have been set as boundary events.
+
+![Colors](images/Purple.png)

 ---
-## How to check Milestones and Events
+## How to Check Milestones and Events

 ### Milestones
-A milestone is a specific point in a process that signifies a significant event or state. It provides a high-level overview of the progress made in the process.
-
+
+A milestone is a specific point in a process that signifies a significant event or state.
+It provides a high-level overview of the progress made in the process.

 ![Milestones](images/Milestone_Screenshot.png)

 In BPMN, if you draw an intermediate event and do not specify its type (like message, signal, start, or end) but give it a name, it becomes a milestone.
 Essentially, a milestone is an event that hasn't been set to something specific.

 ### Events

 Events provide a detailed log of everything that happens in a process.
|
||||
It can be noisy due to the granularity of the information, but it's essential for understanding the intricacies of the process.
|
||||
|
||||
---
|
||||
## How to check messages
|
||||
## How to Check Messages
|
||||
|
||||
Messages in BPMN allow processes to communicate with each other.
|
||||
This communication can take various forms:
|
||||
@ -412,7 +410,6 @@ To explain the concept, we are using a relatable example involving two processes
|
||||
1. The chef starts by receiving the order message from the waiter.
|
||||
2. After preparing the meal, the chef sends a message back to the waiter, signaling that the order is ready.
|
||||
|
||||
|
||||
### Setting Up the Processes
|
||||
|
||||
The setup involves creating two process models named "Chef" and "Waiter."
|
||||
@@ -424,11 +421,13 @@ The chef's process starts by listening for the order message, preparing the meal
 One of the complexities in BPMN messaging is ensuring that the right processes are communicating with each other, especially when multiple instances are running.
 This is achieved using correlation keys and properties.

-![corelation](images/Corelation.png)
+![correlation](images/Corelation.png)

-- **Correlation Keys**: These represent the topic of the conversation. In the given example, the correlation key is the "order".
+- **Correlation Keys**: These represent the topic of the conversation.
+In the given example, the correlation key is the "order".

-- **Correlation Properties**: These are unique identifiers within the conversation. In the example, the "table number" serves as the correlation property, ensuring the right waiter communicates with the right chef.
+- **Correlation Properties**: These are unique identifiers within the conversation.
+In the example, the "table number" serves as the correlation property, ensuring the right waiter communicates with the right chef.

 ### Execution and Observation

@@ -438,9 +437,8 @@ Once the chef confirms the meal's readiness, a message is sent back to the waite

 For a more visual understanding and a step-by-step walkthrough, you can watch Dan Funk's full tutorial [here](https://www.youtube.com/watch?v=Uk7__onZiVk).

-
 ---
-## How to share process instance with Short Links
+## How to Share Process Instances with Short Links

 The short link feature provides a convenient way to share process instances with others without the need to copy and paste lengthy URLs.
 This feature is especially useful for quick sharing via email, messaging apps, or within documentation.
|
||||
To copy the short link:
|
||||
|
||||
- **Access the Process Instance**: Open the process instance that you wish to share.
|
||||
- **Find the Short Link Icon**: Look for the link icon near the process instance heading and click on the link icon to copy the short link to your clipboard automatically. please refer to the screenshot provided
|
||||
|
||||
- **Find the Short Link Icon**: Look for the link icon near the process instance heading and click on the link icon to copy the short link to your clipboard automatically. Please refer to the screenshot provided.
|
||||
|
||||
![Short Link](images/Short_link.png)
|
||||
|
||||
@ -460,19 +457,22 @@ Now, you can paste the short link into your desired communication medium to shar
|
||||
|
||||
To access and review completed user forms within a specific process model, follow these guidelines:
|
||||
|
||||
1. **Find the Tasks Tab in Process Instance**: Begin by going to the process instance and scroll to locate the 'Tasks' tab. This area displays all user forms connected to the process.
|
||||
1. **Find the Tasks Tab in Process Instance**: Begin by going to the process instance and scrolling to locate the 'Tasks' tab. This area displays all user forms connected to the process.
|
||||
|
||||
2. **Examine Completed Forms**:
|
||||
- **Forms You Completed**: In this section, you can view the forms that you have completed. It allows you to see the specific details and inputs you provided in each task.
|
||||
- **Forms You Completed**: In this section, you can view the forms that you have completed.
|
||||
It allows you to see the specific details and inputs you provided in each task.
|
||||
![Completed by me](images/Completed_by_me.png)
|
||||
|
||||
**Forms Completed by Others**: This part shows all the forms completed by any user. You can see who completed each form and the last time it was updated. However, for privacy and security reasons, you won't be able to view the specific input details of forms completed by others.
|
||||
- **Forms Completed by Others**: This part shows all the forms completed by any user.
|
||||
You can see who completed each form and the last time it was updated.
|
||||
However, for privacy and security reasons, you won't be able to view the specific input details of forms completed by others.
|
||||
![Completed by others](images/Completed_by_others.png)
|
||||
|
||||
This approach ensures you can monitor and review the progress of user forms within any process model while maintaining the confidentiality of inputs made by other users.
|
||||
|
||||
---
|
||||
## How to view task instance history
|
||||
## How to View Task Instance History
|
||||
|
||||
Monitoring the history of task instances is helpful for tracking the progress and execution details of a workflow.
|
||||
This guide provides a step-by-step approach to access and understand the task instance history, including the interpretation of task statuses.
|
||||
@@ -481,23 +481,25 @@ This guide provides a step-by-step approach to access and understand the task in

 1. **Run the Process**: Initiate a workflow process in SpiffWorkflow.

-2. **Access the Process Instance**: After running the process, navigate to the specific process instance within the SpiffWorkflow interface. This is where you can track the progress of the tasks.
+2. **Access the Process Instance**: After running the process, navigate to the specific process instance within the SpiffWorkflow interface.
+This is where you can track the progress of the tasks.
 ![Access process instance](images/Access_Process_Instance.png)

-3. **View Task Details**: Click on the executed task or event that has been completed. For instance, in this example we clicked on "user task".
+3. **View Task Details**: Click on the executed task or event that has been completed.
+For instance, in this example, we clicked on "user task".
 ![Access task instance](images/Task_instance.png)

 You will be presented with detailed information about each task instance, including its status and execution timestamp.

 For example:
-- `2 : 04-01-2024 19:58:11 - MAYBE`
-- `3 : 04-01-2024 19:58:10 - COMPLETED`
-- `4 : 04-01-2024 19:58:07 - COMPLETED`
+- `2: 04-01-2024 19:58:11 - MAYBE`
+- `3: 04-01-2024 19:58:10 - COMPLETED`
+- `4: 04-01-2024 19:58:07 - COMPLETED`
 ![Access task instance](images/task_instance_history.png)

 - **COMPLETED Status**: Tasks marked as 'COMPLETED' have finished their execution successfully and have moved the workflow forward.
-- **MAYBE Status**: Indicates that the task still exists within SpiffWorkflow. While these tasks could be omitted for clarity, retaining them provides a complete picture of the workflow's execution.
+- **MAYBE Status**: Indicates that the task still exists within SpiffWorkflow.
+While these tasks could be omitted for clarity, retaining them provides a complete picture of the workflow's execution.

 Viewing task instance history in SpiffWorkflow is now more streamlined and informative, thanks to recent updates.
 Users can effectively track each task's execution, status, and timing, gaining insights into the workflow's overall performance.
@ -61,6 +61,7 @@ In that case, Python won't recognize MySQLdb as a module, so after the above ins
import pymysql
pymysql.install_as_MySQLdb()
```
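A minimal sketch of how this shim is typically applied: the alias just needs to be installed before anything imports `MySQLdb` (the connection details below are hypothetical placeholders, not values from this guide).

```python
import pymysql

# register pymysql as a drop-in replacement before MySQLdb is imported anywhere
pymysql.install_as_MySQLdb()

import MySQLdb  # this now resolves to pymysql under the hood

# hypothetical connection, assuming a local MySQL server and example credentials
# conn = MySQLdb.connect(host="localhost", user="root", password="", database="example_db")
```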

### 5. Access Hosted Version of Spiff

If you prefer not to install anything locally:
@ -34,7 +34,7 @@ This guide will walk you through the steps to modify this message in SpiffWorkfl

![Launching Editor](images/onboarding_4.png)

After making your desired modifications, save the changes to update the welcome message.
After making your desired modifications, save the changes to update the welcome message.

Once you've updated the welcome message, it will be displayed prominently on the home page after users log in.
The message will be positioned in a way that it's one of the first things users see, ensuring they receive the intended greeting every time they access the platform.
@ -126,7 +126,9 @@ A previously completed section is now active and shown in yellow.
![Reset](images/reset_process5.png)

> **Step 8: "Resume" process instance.
** The process instance should be resumed by selecting the ‘Resume’ icon next to the Process Instance Id.
**

The process instance should be resumed by selecting the ‘Resume’ icon next to the Process Instance Id.

![Reset](images/reset_process6.png)
@ -27,7 +27,7 @@ This is the visual platform where business processes are represented and mapped

## Call Activity

This refers to the act of a parent or higher-level process invoking a pre-defined or reusable child process, which is represented in another process diagram.
This refers to the act of a parent or higher-level process invoking a predefined or reusable child process, which is represented in another process diagram.
This invocation allows for the utilization of the child process multiple times, enhancing reusability within the overall model.

## Collapsed Subprocess
@ -84,7 +84,7 @@ There are four types of Gateways: Exclusive, Parallel, Inclusive, and Event-Base

## Intermediate Event

This is an event that occurs within the middle of a process, neither at the start nor the end.
This is an event that occurs in the middle of a process, neither at the start nor the end.
It can be connected to other tasks through connectors or placed on the border of a task.
It evaluates conditions and circumstances, triggering events and enabling the initiation of alternative paths within the process.
@ -99,7 +99,7 @@ These are subdivisions within a Pool that are utilized to assign activities to s
## Merge

This is the process in which two or more parallel Sequence Flow paths converge into a single path, achieved either through multiple incoming Sequence Flows or by utilizing an Exclusive Gateway.
This merging of paths is also commonly referred to as an "OR-Join".
This merging of paths is also commonly referred to as an "OR-Join."

## Message
@ -7,5 +7,10 @@ Activities can be classified into three categories: Task, Subprocess, and Call A
These activities can be either atomic or non-atomic.
Atomic activities are indivisible and represent single tasks, while non-atomic activities involve multiple steps or subprocesses that work together to achieve a larger objective.

## Workflow
## Process Instance

A Process Instance is a specific occurrence of a Process Model that is executed within a workflow engine.

## Process Model

A Process Model is a visual representation of a process that defines the sequence of activities, decisions, and interactions required to achieve a particular goal.
@ -7,11 +7,10 @@ function error_handler() {
trap 'error_handler ${LINENO} $?' ERR
set -o errtrace -o errexit -o nounset -o pipefail

file="Building_Diagrams/data_stores.md"
# example input
# file="Building_Diagrams/data_stores.md"

file_to_use="${1:-$file}"

gitc "$file_to_use"
python bin/gpt-proofread.py "$file_to_use"
qmd_file="$(echo "$file_to_use" | sed 's/\.md$/.qmd/')"
mv "$qmd_file" "$file_to_use"
@ -9,18 +9,25 @@ set -o errtrace -o errexit -o nounset -o pipefail

# function to update single file
function update_file() {
markdown_to_ventilated_prose.py "$1" "$1"
./bin/edit "$1"
markdown_to_ventilated_prose.py "$1" "$1"
local file="$1"
if [[ "$file" == "./Support/FAQ.md" ]]; then
echo "skipping $file since it is in a question and answer format that LLMs cannot handle. They assume you are doing few-shot learning and do not return the full doc."
return
fi
markdown_to_ventilated_prose.py "$file" "$file"
./bin/edit "$file"
markdown_to_ventilated_prose.py "$file" "$file"
}

# while IFS= read -r -d '' file; do
# update_file "$file"
# done < <(find . -type f -name "*.md" -print0)
while IFS= read -r -d '' file; do
update_file "$file"
done < <(find . -type f -name "*.md" -print0)

# update_file "Support/Welcome_Messages.md"

echo 'fyi, running test files, not all files'
# these are long, problematic files, good for testing.
# not sure why documentation.md likes to get lots of extra newlines added.
for file in Getting_Started/quick_start.md Support/FAQ.md documentation/documentation.md; do
update_file "$file"
done
# echo 'fyi, running test files, not all files'
# for file in Getting_Started/quick_start.md Support/FAQ.md documentation/documentation.md; do
# update_file "$file"
# done
@ -2,9 +2,10 @@
# and then modified for our use case.
import sys
import os
import difflib
import os.path
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.text_splitter import MarkdownTextSplitter
from langchain_openai import ChatOpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.prompts.chat import (
ChatPromptTemplate,
@ -33,24 +34,30 @@ human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
# - short and focused
# - clear over fun
# - brief over verbose
# - Do not leave any trailing spaces (handled by another script, though)
# - Never remove entire sentences (didn't seem necessary, since we said keep everything else exactly the same)

system_text = """You are proofreading and you will receive text that is almost exactly correct, but may contain errors. You should:
system_text = """You are proofreading a markdown document and you will receive text that is almost exactly correct, but may contain errors. You should:

- Fix spelling
- Not edit URLs
- Never touch a markdown link; these might look like: [Image label](images/Manual_instructions_panel.png)
- Improve grammar that is obviously wrong
- Fix awkward language if it is really bad
- keep everything else exactly the same, including tone and voice
- don't change markdown syntax, e.g. keep [@reference]
- Never remove entire sentences
- never cut jokes
- output 1 line per sentence (same as input)
- Do not put multiple sentences on the same line
- Do not leave any trailing spaces
- Make sure you do not remove the first header in a file that begins with a single #
- Keep everything else exactly the same, including tone and voice
- not change the case of words unless they are obviously wrong
- Avoid changing markdown syntax, e.g. keep [@reference]
- Output one line per sentence (same as input)
- Avoid putting multiple sentences on the same line
- Make sure you do not remove any headers at the beginning of the text (markdown headers begin with one or more # characters)

The markdown document follows. The output document's first line should probably match that of the input document, even if it is a markdown header.
"""

system_prompt = SystemMessage(content=system_text)

EDIT_DIR = "/tmp/edits"

openai_api_key = os.environ.get("OPENAI_API_KEY")
if openai_api_key is None:
keyfile = "oai.key"
@ -65,33 +72,98 @@ model = "gpt-4o"
llm = ChatOpenAI(openai_api_key=openai_api_key, model=model, request_timeout=240)


def read_file(file_path):
with open(file_path, "r") as f:
return f.read()


def split_content(content, chunk_size=13000):
splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=0)
return splitter.split_text(content)


def process_chunk(doc, chat_prompt, retries=3, chunk_index=0):
for attempt in range(retries):
result = llm.invoke(chat_prompt.format_prompt(text=doc).to_messages())
edited_result_content = result.content
if 0.95 * len(doc) <= len(edited_result_content) <= 1.05 * len(doc):
return edited_result_content
print(f"Retry {attempt + 1} for chunk due to size mismatch.")
raise ValueError("Failed to process chunk after retries.")


def get_edited_content(docs, chat_prompt):
edited_content = ""
for i, doc in enumerate(docs):
edited_result_content = process_chunk(doc, chat_prompt, chunk_index=i)
edited_content += edited_result_content + "\n"
return edited_content


def analyze_diff(diff_file_path):
diff_content = read_file(diff_file_path)
analysis_prompt = f"""
You are an expert technical editor.
Please analyze the following diff and ensure it looks like a successful copy edit of a markdown file.
Editing URLs is not allowed; never touch a link like [Image label](images/Manual_instructions_panel.png)
It is not a successful edit if line one has been removed (editing is fine; removing is not).
It is not a successful edit if three or more lines in a row have been removed without replacement.
Edits or reformats are potentially good, but simply removing or adding a bunch of content is bad.
Provide feedback if there are any issues.
If it looks good, just reply with the single word: good

Diff:
{diff_content}
"""
result = llm.invoke([HumanMessage(content=analysis_prompt)])
return result.content

def process_file(input_file):
output_file = os.path.splitext(input_file)[0] + ".qmd"
content = read_file(input_file)
docs = split_content(content)
print(f"Split into {len(docs)} docs")

with open(input_file, "r") as f:
content = f.read()

# Markdown splitter didn't work so well
# splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=0)

# FIXME: actually split
# splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=1000, chunk_overlap=0)
# docs = splitter.split_text(content)
docs = [content]

print("Split into {} docs".format(len(docs)))
chat_prompt = ChatPromptTemplate.from_messages(
[system_prompt, human_message_prompt]
)
os.makedirs(EDIT_DIR, exist_ok=True)

with open(output_file, "w") as f:
for doc in docs:
print(f"doc: {doc}")
result = llm(chat_prompt.format_prompt(text=doc).to_messages())
print(result.content)
f.write(result.content + "\n")
# Save the original content for diff generation
original_content = content

print(f"Edited file saved as {output_file}")
edited_content = get_edited_content(docs, chat_prompt)
temp_output_file = f"{EDIT_DIR}/edited_output.md"

overall_result = None
if edited_content == original_content:
print(f"{input_file}: No edits made.")
return "no_edits"

with open(temp_output_file, "w") as f:
f.write(edited_content)

# Generate and save the diff for the whole file based on the basename of the input file
input_basename = os.path.basename(input_file)
diff_file_path = f"{EDIT_DIR}/{input_basename}.diff"
diff = difflib.unified_diff(
original_content.splitlines(), edited_content.splitlines(), lineterm=""
)
with open(diff_file_path, "w") as diff_file:
diff_file.write("\n".join(diff))

# Analyze the diff
analysis_result = analyze_diff(diff_file_path)

if analysis_result.lower().strip() == "good":
os.replace(temp_output_file, input_file)
print(f"{input_file}: edited!")
return "edited"
else:
print(
f"{input_file}: The diff looked suspect. Diff analysis result: {analysis_result}"
)
return "suspect_diff"


if __name__ == "__main__":
@ -99,4 +171,6 @@ if __name__ == "__main__":
print("Usage: python script.py input_file")
else:
input_file = sys.argv[1]
process_file(input_file)
overall_result = process_file(input_file)
with open(f"{EDIT_DIR}/proofread_results.txt", "a") as f:
f.write(f"{input_file}: {overall_result}\n")
@ -1,2 +1,3 @@
langchain
langchain-openai
openai
@ -29,9 +29,9 @@ Services are allowed to use models, but models are not allowed to use services.
Services cannot use controllers.
Keeping calls flowing in a single direction makes things easier to understand and avoids circular imports.

- We have a general notion that services should not call other services (or at least it must be calls in a single direction. If you call serviceB with serviceA, then serviceB cannot call serviceA)
- We have a general notion that services should not call other services (or at least it must be calls in a single direction. If you call serviceB with serviceA, then serviceB cannot call serviceA).
- Services should get called by routes.
- We have a general notion that services can call models, but models should not call services (again, to avoid circular dependencies)
- We have a general notion that services can call models, but models should not call services (again, to avoid circular dependencies).
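A minimal sketch of that call direction, using hypothetical `Widget` names rather than anything from the codebase:

```python
from dataclasses import dataclass

from flask import Flask, jsonify

app = Flask(__name__)


@dataclass
class WidgetModel:
    # model layer: plain data, knows nothing about services or routes
    id: int
    name: str


class WidgetService:
    # service layer: may build and return models, never reaches back into routes
    @classmethod
    def find_widget(cls, widget_id: int) -> WidgetModel:
        return WidgetModel(id=widget_id, name="example")


@app.route("/widgets/<int:widget_id>")
def widget_show(widget_id: int):
    # route layer: delegates to the service and serializes the result
    return jsonify(WidgetService.find_widget(widget_id))
```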

### Models
@ -49,7 +49,7 @@ When serializing models to JSON:

- Avoid json.dumps when you are creating JSON. Use jsonify (a Flask thing) instead.
- Avoid Marshmallow when possible and instead use @dataclass on your model.
- If you need to represent your object in a very custom way (the default dataclass columns are not working out), write a method called serialized on your model (this is used by the default serializer).
- If you need to represent your object in a very custom way (the default dataclass columns are not working out), write a method called `serialized` on your model (this is used by the default serializer).
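A short, hypothetical sketch of that serialization convention (the model and field names here are illustrative only):

```python
from dataclasses import dataclass

from flask import Flask, jsonify

app = Flask(__name__)


@dataclass
class ExampleModel:
    id: int
    status: str
    internal_notes: str = ""

    def serialized(self) -> dict:
        # custom representation: expose only the fields we want in API responses
        return {"id": self.id, "status": self.status}


@app.route("/examples/<int:example_id>")
def example_show(example_id: int):
    example = ExampleModel(id=example_id, status="complete", internal_notes="hidden")
    # jsonify (not json.dumps) turns the dict into a proper JSON response
    return jsonify(example.serialized())
```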

## Exceptions
@ -25,7 +25,7 @@ graph TD

```

Connector Proxies are containers for connectors.
Connector proxies are containers for connectors.
Connectors are usually Python libraries that are included in connector proxy codebases, but they can also be embedded directly inside of connector proxies.
Our connector-proxy-demo includes a few connectors, including [connector-aws](https://github.com/sartography/connector-aws) and [connector-http](https://github.com/sartography/connector-http).
Connector-http can be used for many API interactions, but you can also [write your own connectors](/dev/how_to_build_a_connector).
@ -35,15 +35,15 @@ To create your own custom extension, follow these steps:

![Extension](images/Extension_UI_schema.png)

As an example, we have created an extension that adds a link to the profile menu in the top right, and also adds a new "Support" page to the app so that users of the application know who to talk to if they have issues.
As an example, we have created an extension that adds a link to the profile menu in the top right and also adds a new "Support" page to the app so that users of the application know who to talk to if they have issues.
You can find the full example [on GitHub](https://github.com/sartography/sample-process-models/tree/sample-models-1/extensions/support).

Notice how the `display_location` "user_profile_item" tells it that the link belongs in the user profile menu (this is the top right menu where the logout link can be found).
Also notice that the extension uischema defines a page ("/support"), and defines the list of components that should show up on this page.
Also, notice that the extension uischema defines a page ("/support") and defines the list of components that should show up on this page.
In this case, that is just a single MarkdownRenderer, which defines how to contact people.

An entirely new application feature with frontend and backend components can be implemented using an extension.
[This typescript interface file](https://github.com/sartography/spiff-arena/blob/main/spiffworkflow-frontend/src/extension_ui_schema_interfaces.ts) codifies the configuration of the extension uischema.
[This TypeScript interface file](https://github.com/sartography/spiff-arena/blob/main/spiffworkflow-frontend/src/extension_ui_schema_interfaces.ts) codifies the configuration of the extension uischema.

## Use Cases
@ -55,9 +55,8 @@ Here are some of the use cases already implemented by our users:
- Creating custom reports tailored to your business metrics
- Incorporating arbitrary content into custom pages using markdown (as in the above example)
- Creating and accessing tailor-made APIs
- Rendering the output of these APIs using jinja templates and markdown
- Rendering the output of these APIs using Jinja templates and markdown

Extensions in SpiffArena offer a robust mechanism to tailor the software to unique business requirements.
When considering an extension, also consider whether the code would be more properly included in the core source code or as a connector inside your [connector proxy](/dev/connector_proxy.md).
In cases where an extension is appropriate, by following the instructions in this guide, organizations can expand the system's functionality to meet their unique needs.
@ -3,6 +3,7 @@
While existing connectors like connector-http are very flexible, you may choose to build a connector for a specific use case.

To get an idea of what you are in for, take a look at existing connectors:

* [connector-http](https://github.com/sartography/connector-http/blob/main/src/connector_http/commands/get_request_v2.py)
* [connector-smtp](https://github.com/sartography/connector-smtp/blob/main/src/connector_smtp/commands/send_email.py)
@ -34,7 +34,7 @@ flowchart LR
## Security

We have security checks in place for both the backend and the frontend.
These include the security lib in the backend, and Snyk in the frontend and backend.
These include the security library in the backend, and Snyk in the frontend and backend.
Two independent security reviews have been performed on the codebase, and mitigations have been implemented to the satisfaction of the reviewers.

## Contributing
@ -2,7 +2,7 @@

This documentation is currently hosted live at [Spiff-Arena's ReadTheDocs](https://spiff-arena.readthedocs.io/en/latest/).

Please set aside a couple of hours to work through this, as getting this setup correctly once is 10,000 times better than having problems every day for the rest of your life.
Please set aside a couple of hours to work through this, as getting this set up correctly once is 10,000 times better than having problems every day for the rest of your life.

## Our Methodology
@ -55,7 +55,7 @@ Our project is managed by a version control system called Git.
You can use Git to submit changes to the documentation, in the same way we use to submit changes to our code.
It is available on GitHub as the [spiff-arena project](https://github.com/sartography/spiff-arena).
GitHub also manages versions of the code and handles running tests.
Readthedocs observes changes in git and manages an automated process that triggers our documentation to be built and deployed.
Readthedocs observes changes in Git and manages an automated process that triggers our documentation to be built and deployed.
It will take a bit to get comfortable with Git, but when you do, you will come to love it (or maybe hate it, but with a lot of respect).

## Setup
@ -65,7 +65,7 @@ But you will find that most of it just works - and that once you get into a regu

### Step 1: Pre-Requisites

Ensure you have been granted write access to our git repository.
Ensure you have been granted write access to our Git repository.
Make sure you have an account on GitHub and then contact `dan@sartography.com` and ask him to add you as a contributor.

### Step 2: Install VSCode
@ -106,10 +106,10 @@ Now click on the two pieces of paper at the top corner of your screen, and you s

- Inside VSCode, go to File -> Preferences -> Extensions
- Search for "myst"
- click the "install" button.
- Click the "install" button.
- Repeat, this time installing the "Python" extension for VSCode (from Microsoft)

![Myst Extension](./images/myst.png "Search or MyST in extensions")
![Myst Extension](./images/myst.png "Search for MyST in extensions")

### Step 7: Install Python Dependencies
@ -5,16 +5,17 @@ The following is a list of enhancements we wish to complete in the near (or even
## Performance / System Improvements

### Benchmarking / Performance Testing

Automated tests that ensure our performance remains consistent as we add features and functionality.

### Support Multiple Connector Proxies

Service Tasks have been a huge win; there are multiple reasons why supporting more than one Connector Proxy would be beneficial:

1. Connect to several separately hosted services
2. Support multiple services written in multiple languages
3. Allow some connectors to be local (HTTP GET/POST) vs. remote (Xero/Coin Gecko)
4. Could support non-HTTP based connectors (Git interactions could be a workflow)
1. Connect to several separately hosted services.
2. Support multiple services written in multiple languages.
3. Allow some connectors to be local (HTTP GET/POST) vs. remote (Xero/Coin Gecko).
4. Could support non-HTTP-based connectors (Git interactions could be a workflow).

### Interstitial Performance
@ -31,9 +32,9 @@ There are a number of useful BPMN components that we do not currently support.
We should evaluate these and determine which ones we should support and how we should support them.
We should consider creating a list of unsupported items.

* Compensation Events (valuable, but difficult)
* Conditional events
* Event Sub-Processes are not currently supported (low-hanging fruit, easy to add)
* Compensation Events (valuable but difficult).
* Conditional events.
* Event Sub-Processes are not currently supported (low-hanging fruit, easy to add).

### Decentralized / Distributed Deployments
@ -49,6 +50,7 @@ This is not as far-fetched or difficult as it may initially seem.
While Python is notoriously bad at parallel execution (the lovely GIL), we have already taken the most critical steps to ensuring it is possible:
1. A team has demonstrated parallel execution using the core SpiffWorkflow library.
2. We can keep a configurable number of "background" SpiffArena processes running that can pick up waiting tasks.

Given these things are already in place, we just need to lock processes at the task or branch level so that ready tasks on parallel branches can be picked up by different background processes at the same time.

### BPMN Definitions at Save Time vs. Run Time
@ -59,6 +61,7 @@ This will also allow us to do some early and deep validation as well.
## End User Experience

### UI Overview

We could really use a good UI/UX review of the site and take a stab at cleaning up the whole site to follow some consistent design patterns and resolve potential issues.

### Customizable Home Page (Non-Status Specific)