Commit Graph

77 Commits

Author SHA1 Message Date
Kelly McDonald 60f5be1aef Check in - pending change from @cullerton 2021-03-30 12:10:49 -04:00
Dan 576aebab19 Sometimes a workflow can be completely broken and unloadable. For instance, it starts off with an engine step / script task that will not execute, so the failure happens the second it is processed. When calling reset on a workflow, we should catch the failure and reset the workflow so we aren't trapped in an unrecoverable state.
To do so, I changed the WorkflowProcessor's reset method to be static: it will try to instantiate the current workflow and execute a cancel_notify event, but if that fails, it will continue and always return a new workflow processor using the latest spec.
2021-03-09 11:52:55 -05:00
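A minimal sketch of that fallback behavior, assuming a dict-based workflow model and invented names (UnrecoverableWorkflowError, the "broken" flag); this is illustrative only, not the repo's actual WorkflowProcessor:

```python
class UnrecoverableWorkflowError(Exception):
    """Stands in for any failure raised while loading a broken workflow."""


class WorkflowProcessor:
    def __init__(self, workflow_model):
        self.workflow_model = workflow_model
        # Hypothetical: a workflow whose first engine step fails blows up on load.
        if workflow_model.get("broken"):
            raise UnrecoverableWorkflowError("engine step failed on load")

    def cancel_notify(self):
        print("firing cancel events for workflow", self.workflow_model["id"])

    @staticmethod
    def reset(workflow_model, clear_data=False):
        # Best effort: fire cancel events on the current state, but never let a
        # broken workflow keep us from resetting it.
        try:
            WorkflowProcessor(workflow_model).cancel_notify()
        except Exception as err:
            print(f"ignoring failure during cancel_notify: {err}")
        # Wipe the serialized state so the processor is rebuilt from the latest spec.
        workflow_model["bpmn_workflow_json"] = None
        workflow_model["broken"] = False
        if clear_data:
            workflow_model["form_data"] = {}
        return WorkflowProcessor(workflow_model)


if __name__ == "__main__":
    model = {"id": 42, "broken": True, "form_data": {"x": 1}}
    processor = WorkflowProcessor.reset(model, clear_data=True)  # does not raise
    print(model)
```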
Dan 291216e081 Fixing a bug that prevented proper evaluation of an enum field where the default value was null or empty. 2021-03-08 14:00:03 -05:00
Dan 9e758c57d3 Merge branch 'dev' into bug/225_enum_lookup_same_field_name 2021-03-02 14:15:58 -05:00
Dan aac3d5c16e Bug #255: this requires the front end to pass in the name of the task when doing a lookup. This prevents a bug where we have multiple user tasks with enum fields that set the same variable, but use different lookup tables to populate the dropdown or search feature. 2021-03-01 14:54:04 -05:00
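A toy illustration of why the task name has to be part of the lookup key; the table contents and function shape here are invented, not the project's actual lookup service:

```python
# Two user tasks set the same variable ("organization") but must search
# different lookup tables, so the key is (task_name, field_name), not field_name alone.
LOOKUP_TABLES = {
    ("enter_sponsor", "organization"): ["Acme Pharma", "Beta Biotech"],
    ("enter_site", "organization"): ["North Hospital", "South Clinic"],
}


def lookup(task_name, field_name, query):
    """Return dropdown options for a field on one specific task."""
    options = LOOKUP_TABLES.get((task_name, field_name), [])
    return [option for option in options if query.lower() in option.lower()]


if __name__ == "__main__":
    # Keyed on field_name alone, these two searches would hit the same table.
    print(lookup("enter_sponsor", "organization", "b"))  # ['Beta Biotech']
    print(lookup("enter_site", "organization", "b"))     # []
```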
mike cullerton 9d804c36d5 Merge branch 'dev' into customer-error-messages-192.
Needed some SQL changes from dev
2021-03-01 10:29:10 -05:00
Dan ff8a44a7b1 Make sure we always have a current user, even if the very first task is a script task. 2021-02-16 12:42:59 -05:00
mike cullerton 39eb5c5c21 This is a decent beginning framework for customer error messages.
It is not complete, but is in a state where we can start to interact with the front end.
Two tests are failing.
Committing so I can work on an error for Alex.
2021-02-12 14:18:42 -05:00
mike cullerton 7e76639cf3 When a study is put on hold, we now reset workflows and call any pending cancel_notify events.
In api.study.update_study we test the study status and call the new WorkflowService method process_workflows_for_cancels.
In services.workflow_service we added the new method process_workflows_for_cancels. It loops through workflows for a study, and resets them if they are in progress.
In services.workflow_processor, we changed the reset method to be an instance method so we can call self.cancel_notify.
In tests.test_lookup_service we changed the call to WorkflowProcessor.reset to reflect the change from class method to instance method.
2021-01-29 14:05:07 -05:00
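Roughly the shape of the loop described above, sketched with stand-in objects in place of the real SQLAlchemy models and status values:

```python
from dataclasses import dataclass, field


@dataclass
class Workflow:
    id: int
    status: str                      # e.g. "not_started", "user_input_required", "complete"
    events_fired: list = field(default_factory=list)

    def reset(self):
        # Instance-method reset so it can call self.cancel_notify() first.
        self.cancel_notify()
        self.status = "not_started"

    def cancel_notify(self):
        self.events_fired.append("cancel_notify")


def process_workflows_for_cancels(workflows):
    """Reset every in-progress workflow for a study that is being put on hold."""
    for wf in workflows:
        if wf.status not in ("not_started", "complete"):
            wf.reset()


if __name__ == "__main__":
    study_workflows = [Workflow(1, "user_input_required"), Workflow(2, "complete")]
    process_workflows_for_cancels(study_workflows)
    print([(w.id, w.status, w.events_fired) for w in study_workflows])
```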
Dan 41dac33039 Removed two tests around soft_reset that are no longer required.
Reset should construct the workflow_processor, thus completing the reset process.
Removed the spec argument from init, as it is never used anywhere.
2021-01-20 13:24:53 -05:00
mike cullerton 039a1dcd79 WorkflowProcessor code for new restart_workflow api endpoint. WorkflowProcessor.reset will clear workflow_model.bpmn_workflow_json, and if clear_data is True will clear form data.
WorkflowProcessor.__init__ is mostly unchanged.
2021-01-19 15:14:36 -05:00
mike cullerton 6a9e6d3570 Restart workflow without form data. Committing so Dan can check it out 2021-01-14 15:32:14 -05:00
Dan e5a38874f6 A hard or soft reset should also cause a 'cancel_notify', which will kick off any CANCEL events. If an error is thrown trying to fulfil a cancel event during a hard or soft reset, the reset should still fire.
Sending emails still had a number of issues correctly parsing its arguments. This is corrected.
2020-12-29 18:05:13 -05:00
Kelly McDonald 94662c560e Added the ability to use the custom functions within a decision/flow coming off of an exclusive join.
Altered the previous test to exercise this feature.
2020-12-01 09:13:12 -05:00
Kelly McDonald 849a4e9f94 Made changes to how soft_reset works so it is backward compatible - essentially, we use the file if we are doing a soft-reset or a new workflow. NB, this may actually still have problems if we have a workflow with P/SMI and we do a soft-reset on it even if there are no structural changes.
Also, change how default_value is interpreted, looking at the object attribute rather than the properties list.
2020-10-09 11:00:33 -04:00
Kelly McDonald 8b23cdc05c commit changes before switching branches 2020-10-09 08:46:14 -04:00
Dan Funk 53d09303d8 Validating that field properties are valid - they must exist as constants on the Task model.
Making all the lookup field names consistent.
Fixing the lookup service which was failing at times trying to find the correct field to use for building the lookup table.
Updating validation to check for additional fields and properties.
When connexion-level errors occur, wrapping them in an API Error to be consistent.
2020-08-27 14:00:14 -04:00
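A hedged sketch of validating field properties against constants on a Task class; the FIELD_PROP_* names below are invented for illustration and are not the project's actual constants:

```python
class Task:
    # Hypothetical constants naming the extension properties a form field may use.
    FIELD_PROP_READ_ONLY = "read_only"
    FIELD_PROP_REPEAT = "repeat"
    FIELD_PROP_LDAP_LOOKUP = "ldap.lookup"

    @classmethod
    def valid_property_names(cls):
        return {
            value
            for name, value in vars(cls).items()
            if name.startswith("FIELD_PROP_")
        }


def validate_field_properties(field_id, properties):
    """Raise if a form field uses a property name the Task model doesn't define."""
    allowed = Task.valid_property_names()
    for prop in properties:
        if prop not in allowed:
            raise ValueError(
                f"Field '{field_id}' uses unknown property '{prop}'; "
                f"expected one of {sorted(allowed)}"
            )


if __name__ == "__main__":
    validate_field_properties("title", ["read_only"])      # passes
    try:
        validate_field_properties("title", ["reed_only"])  # typo caught at validation time
    except ValueError as err:
        print(err)
```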
Carlos Lopez 304ca77e3e Fixing exception within NameError handler 2020-08-14 09:01:15 -06:00
Aaron Louie 73699e66bd Merge branch 'dev' into feature/admin_impersonations 2020-07-30 17:36:20 -04:00
Dan Funk 9704cbcb26 Modifications to the ldap scripts to bring them back in line with what Kelly is doing with the evaluation process. 2020-07-30 13:35:20 -04:00
Dan Funk f15626033d Allow the workflow to be requested without making changes to the workflow - this requires that you specify a read_only flag of true; otherwise it assumes that you want a fully prepared workflow with the next ready task set to run. 2020-07-28 13:33:38 -04:00
Kelly McDonald d617af8565 All tests are passing - may need to refactor a bit and remove comments 2020-07-28 11:02:49 -04:00
Kelly McDonald 70ad3872a7 Fix several bugs; most had an issue with the BPMN document 2020-07-27 12:02:34 -04:00
Kelly McDonald a124e13c6a Replace all legacy style calls with new calls.
Still having issues where we try to eval an empty definition; not quite sure why there is a difference from what we had before. I may need to revert some of it and determine what is going on.
2020-07-24 14:33:24 -04:00
Kelly McDonald f5ab283538 Test adding the ability to augment the workflow to include internal scripts like StudyInfo.
This is the first waypoint on a larger effort to make all of the 'special scripts' that currently require a shebang just another python function.
2020-07-24 12:08:46 -04:00
Dan Funk dd0f984347 Drop backwards compatibility of scripts. While this will cause some initial pain, it's less confusing and error prone, and we are still in the development phase of the project. Were this going straight to production we would likely want to keep this backwards compatibility.
Don't parse on spaces if this is python code, so we avoid any errors in processing - spaces should be valid.
2020-07-20 12:26:34 -04:00
Dan Funk b85869905b Merge branch 'cr-connect-92-scripting-enhancements' of github.com:sartography/cr-connect-workflow into cr-connect-92-scripting-enhancements 2020-07-20 11:40:10 -04:00
Dan Funk 06430550c8 Dropping the RRT-Data-Fix; it should have come out already, but it had a failing test, so pulling it out now rather than delving into what is going wrong with obsolete code. 2020-07-20 11:39:50 -04:00
Kelly McDonald f415f22ccb Add a warning message when we fail due to a syntax error and then try to look up the class as a backup 2020-07-20 10:12:15 -04:00
Kelly McDonald de54b63e20 Process scripts with no shebang (#!) as regular python scripts. If there is a shebang, we look up the class as we did before.
I've also made it so that it falls back if we accidentally forget to add a shebang to a study, as this would otherwise be a breaking change.

With the fallback feature, it should work with unmodified bpmn documents.
2020-07-17 10:56:04 -04:00
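A sketch of that dispatch-and-fallback rule; SCRIPT_CLASSES and run_script are hypothetical stand-ins for the real class lookup, not the project's actual API:

```python
# Hypothetical registry of "special" script classes keyed by their shebang name.
SCRIPT_CLASSES = {
    "StudyInfo": lambda data: {**data, "study": {"title": "Example Study"}},
}


def run_script(script_text, task_data):
    """Shebang -> class lookup; otherwise treat the text as plain Python."""
    stripped = script_text.strip()
    if stripped.startswith("#!"):
        name = stripped[2:].split()[0]
        handler = SCRIPT_CLASSES.get(name)
        if handler is not None:
            return handler(task_data)
        raise ValueError(f"No script class registered for shebang '{name}'")
    try:
        exec(compile(stripped, "<bpmn-script>", "exec"), {}, task_data)
        return task_data
    except (SyntaxError, NameError):
        # Fallback for documents that forgot the shebang on a known script name.
        handler = SCRIPT_CLASSES.get(stripped)
        if handler is not None:
            print(f"warning: treating '{stripped}' as a script class; add a shebang")
            return handler(task_data)
        raise


if __name__ == "__main__":
    print(run_script("x = 1 + 1", {}))          # plain Python
    print(run_script("#!StudyInfo", {"x": 2}))  # shebang class lookup
    print(run_script("StudyInfo", {}))          # fallback path with a warning
```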
Kelly McDonald ab5771024e Check in for sanity check 2020-07-17 09:24:53 -04:00
Dan Funk c7662315aa Assure that any errors that occur during the do_engine_steps is correctly captured and returned to the end user or configurator with enough information for them to act on. 2020-07-15 11:16:35 -04:00
Dan Funk 8a66128189 Bug fixes after bumping the version of SpiffWorkflow to the latest - which has fixes for sequential multi-instance. 2020-07-06 15:34:24 -04:00
Dan Funk 150117587e Merge branch 'dev' into feature/parallel_multiinstance_tasks 2020-06-25 23:20:38 -04:00
Aaron Louie c3ceda4c2f Replaces xml.etree with lxml.etree 2020-06-25 14:02:16 -04:00
Carlos Lopez 7116b582e8 Splitting commands properly without losing double quoted strings 2020-06-25 12:01:24 -06:00
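The standard library's shlex does exactly this kind of quote-aware splitting; whether this commit uses shlex is an assumption, but the example shows the behavior being fixed (the command and email addresses are made up):

```python
import shlex

cmd = 'send_email "Research Approved" pi@example.edu approver@example.edu'

print(cmd.split())       # naive split breaks the quoted subject:
                         # ['send_email', '"Research', 'Approved"', 'pi@example.edu', 'approver@example.edu']
print(shlex.split(cmd))  # quote-aware:
                         # ['send_email', 'Research Approved', 'pi@example.edu', 'approver@example.edu']
```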
Carlos Lopez 5327b469f6 Merge branch 'rrt/dev' into feature/emails-enhancement 2020-06-24 21:46:53 -06:00
Carlos Lopez 4db815a999 Handling incoming values from processor 2020-06-17 21:11:47 -06:00
Dan Funk 3b57adb84c Continuing a major refactor. Some important points:
* TaskEvents now contain the data for each event as it was when the task was completed.
* When loading a task for the front end, if the task was completed previously, we take that data and overwrite it with the latest data, allowing users to see previously entered values.
* Pulling in the Admin branch, as there are changes in that branch that are critical to seeing what is happening when we do this thing.
* Moved code for converting a workflow to an API-ready data structure into the WorkflowService where it belongs, and out of the API.
* Hard resets just convert to using the latest spec; they don't try to keep the data from the last task. There is a better way.
* Moving to a previous task does not attempt to keep the data from the last completed task.
* Added a function that will fix all the existing RRT data by adding critical data into the TaskEvent model. This can be called from the flask command line tool.
2020-06-17 17:11:15 -04:00
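One plausible reading of the second bullet above, with invented names: the data snapshotted on the TaskEvent is the base, and the latest workflow data overwrites it, so previously entered values that were not changed upstream still show:

```python
def form_data_for_task(previous_task_event, latest_workflow_data):
    """Show previously entered values, but let newer workflow data override them."""
    merged = dict(previous_task_event.get("data", {}))
    merged.update(latest_workflow_data)
    return merged


if __name__ == "__main__":
    event = {"task": "enter_pi", "data": {"pi_name": "Dr. Smith", "phone": "555-1234"}}
    latest = {"pi_name": "Dr. Jones"}  # changed upstream since the task was completed
    print(form_data_for_task(event, latest))
    # {'pi_name': 'Dr. Jones', 'phone': '555-1234'}
```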
Dan Funk 1db9401166 Don't put all the data into Spiff Tasks on a reload or backtrack; just store the data that gets submitted each time in the task log, and use that.
This should correct issues with parallel tasks and other complex areas - so we don't have tasks seeing data that isn't along their path.
2020-06-01 17:42:39 -04:00
Dan Funk 2bc735a3f0 Kelly wrote a beautiful method for resetting the workflow that doesn't lose data when reset inside a parallel task; all I needed to do was use it.
Catching TypeErrors and reporting them back to the UI so we don't 500 in a bad way (but we still 500)
2020-05-31 13:48:00 -04:00
Dan Funk afb6be7c60 Re-working the way the redirects function, so we pass arguments as a GET parameter. Just trying to get rid of the weird lag on production.
I noticed the validation sometimes looks ahead for files, so looking at all the tasks now, not just the ready tasks for the lookup field.
Ran into an issue with validation where a workflow model was required, so I create one and delete it.  Another refactor for another day.
2020-05-29 04:42:48 -04:00
Dan Funk 11413838a7 Faster lookup fields. We were parsing the spec each time to get details about how to search. We're just grabbing the workflow id and task id now and building that straight into the full text search index for faster lookups. Should be peppy.
Another speed improvement - data in the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need to use this model frequently.
2020-05-29 01:39:39 -04:00
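The deferred-loading point maps naturally onto SQLAlchemy's deferred(); a minimal sketch, assuming a SQLAlchemy model with invented column names rather than the real FileDataModel:

```python
from sqlalchemy import Column, Integer, LargeBinary, String
from sqlalchemy.orm import declarative_base, deferred

Base = declarative_base()


class FileDataModel(Base):
    __tablename__ = "file_data"

    id = Column(Integer, primary_key=True)
    file_name = Column(String)
    # The large binary payload is not fetched with the row; SQLAlchemy issues a
    # separate SELECT the first time .data is actually accessed on an instance.
    data = deferred(Column(LargeBinary))
```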
Dan Funk dba41f4759 Ludicrously stupid launch into a refactor of the way all files work in the system, at a time when I crave sleep and peace above all other things.
Added a File class that we wrap around the FileModel so the api endpoints don't change, but File no longer holds references to versions or dates of the file_data model; we
figure this out based on a clean database structure.

The ApprovalFile is directly related to the file_data_model - so no chance that a reviewer would review the incorrect version of a file.

Noticed that our FileType enum called "bpmn" "bpmm"; hope this doesn't screw someone up.

Workflows are directly related to the data_models that create the workflow spec they need. So the files should always be there. There are no more hashes, and thus no more hash errors where it can't find the files to rebuild the workflow.

Not much to report here, other than I broke every single test in the system at one point.  So I'm super concerned about this, and will be testing it a lot before creating the pull request.
2020-05-28 20:03:50 -04:00
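A rough illustration of the wrapper idea: the API-facing File object stores no version or date itself and derives both from the related data rows. All names below are stand-ins, not the repo's real models:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class FileDataRow:                 # stand-in for a row in the file_data table
    version: int
    date_created: datetime
    content: bytes


@dataclass
class FileModel:                   # stand-in for the file table
    id: int
    name: str
    data_rows: list


class File:
    """API-facing wrapper: no stored version or date, both derived from data rows."""

    def __init__(self, model: FileModel):
        self._model = model

    @property
    def latest_version(self) -> int:
        return max((row.version for row in self._model.data_rows), default=0)

    @property
    def last_updated(self) -> datetime:
        return max(row.date_created for row in self._model.data_rows)


if __name__ == "__main__":
    model = FileModel(1, "protocol.docx", [
        FileDataRow(1, datetime(2020, 5, 1), b"v1"),
        FileDataRow(2, datetime(2020, 5, 20), b"v2"),
    ])
    wrapped = File(model)
    print(wrapped.latest_version, wrapped.last_updated)
```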
Dan Funk be057e8758 Adding an "UpdateStudy" task that is able to update the data on the study model, useful for setting core data points on the model, such as setting the Primary Investigator, or altering the Study Title.
Fixing a bug where the validation of forms did not correctly process auto-complete fields.
Fixing a bug where the approvals script and the update study script could not process dot notation correctly.

Moved populate_random_data into the WorkflowService where it makes more sense.
2020-05-25 15:30:06 -04:00
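A small sketch of the dot-notation handling the last two fixes refer to; the update format and attribute names are assumptions for illustration, not the real UpdateStudy or approvals scripts:

```python
class Investigator:
    def __init__(self):
        self.name = None


class Study:
    def __init__(self):
        self.title = None
        self.primary_investigator = Investigator()


def update_study(study, updates):
    """Apply {'dotted.path': value} updates, walking nested attributes."""
    for dotted_name, value in updates.items():
        *path, last = dotted_name.split(".")
        target = study
        for attr in path:                 # walk intermediate objects
            target = getattr(target, attr)
        setattr(target, last, value)


if __name__ == "__main__":
    study = Study()
    update_study(study, {"title": "Widgets and You",
                         "primary_investigator.name": "Dr. Jones"})
    print(study.title, study.primary_investigator.name)
```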
Dan Funk 971d9a58e9 As we now have an approval_service.py, I moved all the business logic into this service and out of the request_approval.py script, and moved all tests for these features into a test file for the service. This will make it easier to cross reference what is happening, as everything happens in one file.
As many of the scripts need to know the workflow, and it's down in a weird parameter, I moved this so it is passed in each time.
2020-05-24 16:13:15 -04:00
Aaron Louie 2b5687c3a3 Fixes pernicious bug where template document versions were not being updated properly, and template completion script was not honoring version numbers 2020-05-20 00:10:32 -04:00
Aaron Louie 481a9ed04c Gets image ids from task data and injects images into jinja docx 2020-05-19 21:51:54 -04:00
Dan Funk 53255ef35e Massive overhaul of the Workflow API endpoint.
No Previous Task, No Last Task, No Task List.  Just the current task, and the Navigation.
Use the token endpoint to set the current task, even if it is a "READY" task in the api.
Previous Task can be set by identifying the prior task in the Navigation (I'm hoping)
Preferring camel case to snake case on all new APIs. Maybe clean the rest up later.
2020-05-15 15:54:53 -04:00
Dan Funk e723992fde Found a number of bugs with the parallel multi-instance - pulling in some recent changes from SpiffWorkflow to open things up a bit more to allow functional jumping between tasks. 2020-05-12 12:23:43 -04:00