- If the default was False, we failed an early truthiness check and returned None; we now only return None when the default is None.
- We have a better test for deciding whether the default value we return should be True or False; we used to return True if the value was the string 'false'.
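A rough sketch of the new default handling; the helper name is hypothetical, not the real function:

```python
def interpret_default(default):
    """Hypothetical sketch of the improved default handling for boolean fields.

    Only a missing default returns None (so an explicit False survives), and a
    string default is compared case-insensitively instead of relying on truthiness,
    which used to make the string 'false' evaluate to True.
    """
    if default is None:
        return None
    if isinstance(default, str):
        return default.strip().lower() == 'true'
    return bool(default)
```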
In api.study.update_study we test the study status and call the new WorkflowService method process_workflows_for_cancels.
In services.workflow_service we added the new method process_workflows_for_cancels. It loops through workflows for a study, and resets them if they are in progress.
In services.workflow_processor, we changed the reset method to be an instance method so we can call self.cancel_notify.
In tests.test_lookup_service we changed the call to WorkflowProcessor.reset to reflect the change from class method to instance method
We may want to revisit this because I used a 'Magic' workflow_ref_id of 'REFERENCE_FILES' as a special case so I could re-use most of what I already had. This works well, but we may want to structure it differently.
If we perform a hard or soft reset, and an error is thrown trying to fulfill a cancel event, the reset should still fire.
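A minimal sketch of the idea behind process_workflows_for_cancels; the status values and the processor factory here are stand-ins, not the real models:

```python
from enum import Enum


class WorkflowStatus(Enum):
    not_started = "not_started"
    user_input_required = "user_input_required"
    waiting = "waiting"
    complete = "complete"


def process_workflows_for_cancels(workflows, make_processor):
    """Reset every in-progress workflow when its study is cancelled.

    `workflows` is any iterable of objects with a `status` attribute, and
    `make_processor` builds a WorkflowProcessor-like object whose reset() is
    now an instance method, so it can call its own cancel_notify() even if
    the reset itself hits an error.
    """
    in_progress = (WorkflowStatus.user_input_required, WorkflowStatus.waiting)
    for workflow in workflows:
        if workflow.status in in_progress:
            make_processor(workflow).reset()
```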
Sending emails still had a number of issues correctly parsing its arguments. This is corrected.
Fixing workflow sync paths that were incorrect.
Repairing a suddenly failing test in files that just doesn't make sense.
Bumping spiffworkflow that contains a fix for issue #155
Currently, we don't have a test for Sentry. Unsure how to do this.
Also added a script, service and test workflow to help. (Also to learn about adding a script and service.)
2. Disabling the token timeout for now, to see if this corrects the issues Alex is having with lost work.
3. Raising more thoughtful error messages for unknown lookup options.
4. Providing better validation of default values and injecting the correct value for defaults related to enum lists of all types.
5. Bumping Spiffworkflow library which contains some better error messages and checks.
Making all the lookup field names consistent.
Fixing the lookup service which was failing at times trying to find the correct field to use for building the lookup table.
Updating validation to check for additional fields and properties.
When connexion-level errors occur, wrapping them in an API Error to be consistent.
Still having issues where we try to eval an empty definition, not quite sure why there is a difference from what we had before. I may need to revert some of it and determine what is going on.
This is the first waypoint on a larger effort to make all of the 'special scripts' that currently require a shebang to be just another python function.
I've also made it so that it falls back if we accidentally forget to add a shebang to a study, as this would otherwise be a breaking change.
With the fallback feature, it should work with unmodified bpmn documents.
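A minimal sketch of the fallback idea, assuming the tag is first treated as plain Python and only falls back to the legacy named-script form when that fails; all three argument names are stand-ins and the exact precedence in the real code may differ:

```python
def run_script_tag(script_text, registered_scripts, evaluate_python):
    """Sketch of the shebang fallback.

    New style: the script tag is ordinary Python calling a registered function.
    Legacy style: the first token names a script, e.g. "StudyInfo details".
    """
    try:
        return evaluate_python(script_text)
    except Exception:
        tokens = script_text.strip().split()
        if tokens and tokens[0] in registered_scripts:
            # Fall back to the legacy behaviour so unmodified BPMN files keep working.
            return registered_scripts[tokens[0]](*tokens[1:])
        raise
```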
Add the lane information to the Task model.
Drop the foreign key constraint on the user_uid in the task log, as we might create tasks for users before they ever log into the system.
Add a new endpoint to the API called task events. It should be possible to query this and get a list of all tasks that need a user's attention.
The task events returned include detailed information about the workflow and study as sub-models
Rename all the actions in the event log to things that are easier to pass over the API as arguments; make this backwards compatible, updating existing names in the database via the migration.
Thoroughly test the navigation and task details as control of the workflow is passed between two lanes.
Adding checks in the API to assure the correct person is completing a task based on the task's lane (see the sketch after this list).
Adding lane to the Navigation item.
Adding a check to assure that unique user ids can be identified using task.data
Added some additional ldap entries to make testing and development easier.
Removed a big chunk of duplicate code from task_tests_api
Modified some of the helper functions to make it easier to test as specific users.
Added some additional bpmn models to the tests for testing lanes and roles.
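A sketch of the lane check mentioned above; the error codes, and the idea that task.data maps a lane name to a user id, are assumptions for illustration only:

```python
class ApiError(Exception):
    """Stand-in for the application's API error type."""

    def __init__(self, code, message):
        super().__init__(message)
        self.code = code


def assure_user_in_lane(task, user_uid):
    """Refuse a task submission from anyone other than the lane's assigned user.

    Assumes the task exposes `lane` and `data`, and that task.data keys a lane
    name to the uid of the user responsible for it; a task without a lane may
    be completed by anyone.
    """
    if not task.lane:
        return
    assigned_uid = task.data.get(task.lane)
    if assigned_uid is None:
        raise ApiError("task.lane.error",
                       f"No user id in task data for lane '{task.lane}'.")
    if assigned_uid != user_uid:
        raise ApiError("permission_denied",
                       f"User '{user_uid}' may not complete tasks in lane '{task.lane}'.")
```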
* The Task.title returned to the front end will now attempt to process the "display_name" property for dot-notation syntax, making it possible to use this for multi-instance tasks, but it will work in all cases where we want the title to change based on values in the data model (see the sketch below).
* Fixing a bug in test_study_api where it wasn't updated when we made recent changes to the different states of a study.
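Roughly, the dot-notation part of the title handling works like this (a simplified stand-alone version, not the actual implementation):

```python
def resolve_title(display_name, data):
    """Resolve a dot-notation display_name against the task's data.

    "investigator.label" with data {"investigator": {"label": "Dan"}} gives "Dan";
    if any segment is missing we fall back to the raw display_name unchanged.
    """
    value = data
    for part in display_name.split('.'):
        if isinstance(value, dict) and part in value:
            value = value[part]
        else:
            return display_name
    return str(value)
```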
Adds an optional 'value' parameter to the lookup endpoint so you can find a specific entry in the lookup table.
Makes sure the data attribute returned on a lookup model is a dictionary, and not a string.
Fixes a previous bug that would crop up if double spaces were used when performing a search.
Adds an optional 'id' parameter to the lookup endpoint so you can find a specific entry in the lookup table.
Makes sure the data attribute returned on a lookup model is a dictionary, and not a string.
Fixes a previous bug that would crop up if double spaces were used when performing a search.
In order to allow proper deletion of tasks, we no longer merge data returned from the front end, we set it directly as the task_data.
When returning data to the front end, we take any previous form submission and merge it into the current task data, allowing users to keep their previous submissions.
There is now an "extract_form_data" method that does its best to calculate what form data might have changed from the front end.
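Something like the following captures the intent of extract_form_data, heavily simplified and not the real implementation:

```python
def extract_form_data(latest_data, previous_data):
    """Best-effort guess at what the front end actually changed on this submission.

    Keep only keys that are new or whose values differ from the data the task was
    sent out with, so untouched fields are not merged back in and deletions stick.
    """
    return {key: value for key, value in latest_data.items()
            if key not in previous_data or previous_data[key] != value}
```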
* TaskEvents now contain the data for each event as it was when the task was completed.
* When loading a task for the front end, if the task was completed previously, we take that data, and overwrite it with the latest data, allowing users to see previously entered values.
* Pulling in the Admin branch, as there are changes in that branch that are critical to seeing what is happening when we do this thing.
* Moved code for converting a workflow to an API-ready data structure into the Workflow service where it belongs, and out of the API.
* Hard resets just convert to using the latest spec, they don't try to keep the data from the last task. There is a better way.
* Moving to a previous task does not attempt to keep the data from the last completed task.
* Added a function that will fix all the existing RRT data by adding critical data into the TaskEvent model. This can be called from the Flask command line tool.
We now cache the LDAP records - so we look in our own database for the record before calling out to ldap for the details when given a straight up computing id like dhf8r.
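The lookup is cache-first; roughly like this, with stand-in callables for the database and LDAP access:

```python
def user_info(uid, db_lookup, ldap_lookup, db_save):
    """Cache-first lookup for a plain computing id such as dhf8r.

    `db_lookup`, `ldap_lookup` and `db_save` are stand-ins for the real data
    access: check our own table first and only call out to LDAP (caching the
    result) when the record is not already stored locally.
    """
    record = db_lookup(uid)
    if record is None:
        record = ldap_lookup(uid)
        db_save(record)
    return record
```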
Added "date_approved" to the approval model.
And moved the approver and primary investigator into real associated models to make it easier to dump.
Fixed a problem with the validation that was causing it to throw incorrect errors on valid workflows, getting it to behave a little more like the front end and respect the read-only fields. But it was mainly to do with always returning all the data with each form submission.
Also, when returning error messages, attempt to include the task data for the task that caused the error.
Also, when attempting to delete any file, respond with an API error explaining the issue, and log the details.
I noticed the validation sometimes looks ahead for files, so we now look at all the tasks, not just the ready tasks, for the lookup field.
Ran into an issue with validation where a workflow model was required, so I create one and delete it. Another refactor for another day.
Another speed improvement - data in the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need to use this model frequently.
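For reference, column deferral in SQLAlchemy looks roughly like this; the column names here are guesses, not the actual model:

```python
from sqlalchemy import Column, Integer, LargeBinary
from sqlalchemy.orm import declarative_base, deferred

Base = declarative_base()


class FileDataModel(Base):
    """Illustrative only; the real model has more columns than shown here."""
    __tablename__ = 'file_data'
    id = Column(Integer, primary_key=True)
    # deferred(): the binary payload is not loaded with the row, only when
    # .data is actually accessed, keeping frequent metadata queries fast.
    data = deferred(Column(LargeBinary))
```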
Added a File class that we wrap around the FileModel so the API endpoints don't change, but File no longer holds references to versions or dates of the file_data model; we figure this out based on a clean database structure.
The ApprovalFile is directly related to the file_data_model - so there's no chance that a reviewer would review the incorrect version of a file.
Noticed that our FileType enum spelled "bpmn" as "bpmm"; hope this doesn't screw someone up.
Workflows are directly related to the data_models that create the workflow spec it needs, so the files should always be there. There are no more hashes, and thus no more hash errors where it can't find the files to rebuild the workflow.
Not much to report here, other than I broke every single test in the system at one point. So I'm super concerned about this, and will be testing it a lot before creating the pull request.
From an API point of view you can do the following (and only the following):
/files?workflow_spec_id=x
* You can find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id
/files?workflow_id=x
* You can find all files associated with a workflow_id, and add a file that is directly associated with the workflow
/files?workflow_id=x&form_field_key=y
* You can find all files associated with a form element on a running workflow, and add a new file.
Note: you can add multiple files to the same form_field_key, IF they have different file names. If the same name, the original file is archived,
and the new file takes its place.
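For example, with the requests library (the base URL, ids, and payload shapes are made up for illustration):

```python
import requests

BASE = "http://localhost:5000/v1.0"  # made-up base URL

# All files attached to a workflow specification.
spec_files = requests.get(f"{BASE}/files", params={"workflow_spec_id": "x"}).json()

# All files attached to a running workflow.
workflow_files = requests.get(f"{BASE}/files", params={"workflow_id": 42}).json()

# Files attached to one form field on a running workflow; posting a new file with
# the same name archives the original and the new file takes its place.
field_files = requests.get(
    f"{BASE}/files", params={"workflow_id": 42, "form_field_key": "y"}).json()
```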
The study endpoints always return a list of the file metadata associated with the study. Removed /studies-files, but there is an endpoint called /studies/all that returns all the studies in the system, and does include their files.
On a deeper level:
The File model no longer contains:
- study_id,
- task_id,
- form_field_key
Instead, if the file is associated with a workflow, then that is the one way it is connected to the study, and we use this relationship to find files for a study.
A file is never associated with a task_id, as these change when the workflow is reloaded.
The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
Modified the request_approval to take a list of arguments, which works better for us... today.
UpdateStudy correctly handles validation.
WorkflowService correctly populates random values from lookup tables.
And several fixes down in Spiffworkflow, including a big bug where only the last item in a decision table made it through.
Fixing a bug where the validation of forms did not correctly process auto-complete fields.
Fixing a bug where the approvals script and the update study script could not process dot notation correctly.
Moved populate_random_data into the WorkflowService where it makes more sense.
Using the LDAP service for checking user details in development mode - even if you are using the back door.
Added a new Flask function load-example-rrt-data that loads the rrt workflow, and not the CRC workflows.
Modified the "load-example-data" in the tests to use some test data, rather than loading up all the workflows[
in CRC each time, with a parameter to load crc data if that is required - which is enabled for just a handful of tests.
(Tests run in 1/4 the time now)
Added the following columns:
* date_created - so we know when the file was created
* renamed workflow_version to just "version", because everything has a version; this is the version of the request.
* workflow_hash - this is just a quick way to see what files and versions are associated with the request; it could be factored out.
* study - a quick relationship link to the study, so that this model is easier to use.
* workflow - ditto
* approval_files - this is a list from a new link table that links an approval to specific files and versions.
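Putting that together, the approval model now looks roughly like this; the names and the stub related models are guesses for illustration, not the real code:

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class StudyModel(Base):          # minimal stub for illustration
    __tablename__ = 'study'
    id = Column(Integer, primary_key=True)


class WorkflowModel(Base):       # minimal stub for illustration
    __tablename__ = 'workflow'
    id = Column(Integer, primary_key=True)


class ApprovalFile(Base):        # link table tying an approval to specific file versions
    __tablename__ = 'approval_file'
    id = Column(Integer, primary_key=True)
    approval_id = Column(Integer, ForeignKey('approval.id'))
    file_data_id = Column(Integer)


class ApprovalModel(Base):
    __tablename__ = 'approval'
    id = Column(Integer, primary_key=True)
    study_id = Column(Integer, ForeignKey('study.id'))
    workflow_id = Column(Integer, ForeignKey('workflow.id'))
    version = Column(Integer)                     # formerly workflow_version
    workflow_hash = Column(String)                # quick view of the files/versions in the request
    date_created = Column(DateTime, default=datetime.utcnow)
    study = relationship(StudyModel)              # convenience link to the study
    workflow = relationship(WorkflowModel)        # ditto
    approval_files = relationship(ApprovalFile)   # list from the new link table
```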
The RequestApproval is logically sound, but still needs some additional pieces in place to be callable from a BPMN workflow diagram.
Altered the file service to pick up on changes to files vs adding new files, so that versions are picked up correctly as
users modify their submission - adding new files or replacing existing ones. Deleting files worries me, and I will need to revisit this.
The damn base test keeps giving me a headache, so I made changes there to see if clearing and dropping the database each time won't allow the tests to pass more consistently.
Lots more tests around the file service to make sure it is versioning user uploaded files correctly.
The "Test Request Approval Script" tries to find to assure the correct behavior as this is likely to be called many times repeatedly and with little knowledge of the internal system. So it should just "do the right thing".
PB_ENABLED can be set to false in the configuration (either in a file called instance/config.py, or as an environment variable)
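For example (how the environment variable is parsed into a boolean is up to the app's config loader, so treat this as illustrative):

```python
# instance/config.py -- disable Protocol Builder integration locally
PB_ENABLED = False

# or, before starting the app:
#   export PB_ENABLED=false
```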
Added a check in the base_test, to assure that we are always running tests with the test configuration, and bail out otherwise. Setting TESTING=true as an environment variable will get this, but so will the correct ordering of imports. Just be dead certain the first file every test file imports is base_test.py.
Aaron was right, and we call the Protocol Builder in all kinds of awful places. But we don't do this now. So Carlos, you should have the ability to reuse a lot of the logic in the study_service now.
I dropped the poorly named "study-update" endpoint completely. We weren't using it. POST and PUT to Study still work just fine for doing exactly that.
All the tests now run and pass with the Protocol builder disabled. Tests that specifically check PB behavior turn it back on for the test, or mock it out.
Updating the user 'sso' endpoint to provide additional information for debugging.
Pulling information from ldap to stay super consistent on where we get our information.
Refactored calls into a new lookup_service to keep things tidy.
New keys for all enum/auto-complete fields:
PROP_OPTIONS_FILE = "spreadsheet.name"
PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
PROP_LDAP_LOOKUP = "ldap.lookup"
FIELD_TYPE_AUTO_COMPLETE = "autocomplete"
No Previous Task, No Last Task, No Task List. Just the current task, and the Navigation.
Use the token endpoint to set the current task, even if it is a "READY" task in the api.
Previous Task can be set by identifying the prior task in the Navigation (I'm hoping)
Preferring camel case to snake case on all new APIs. Maybe clean the rest up later.
Running all extension/properties through the Jinja template processor so you can have custom display names using data; very helpful for building multi-instance displays.
Properties were returned as an array of key/value pairs, which is just mean. Switched this to a dictionary.
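In effect, something like this (a simplified stand-alone version; the real code works on the task's extension properties):

```python
from jinja2 import Template


def render_properties(properties, data):
    """Run every extension/property value through Jinja against the task data.

    With data {"position": 2, "name": "Dan"}, a value of
    "Review item {{ position }} for {{ name }}" renders as "Review item 2 for Dan".
    """
    return {key: Template(value).render(**data) for key, value in properties.items()}
```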
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less subject to breaking through better mocks.
Allow all tests to pass even when the protocol builder mock isn't running locally.
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
I noticed we were saving the workflow every time we loaded it up, rather than only when we were making changes to it. Refactored this to be a little more careful.
Centralized the saving of the workflow into one location in the processor, so we can make sure we update all the details about that workflow every time we save.
The workflow service has a method that will log any task action taken in a consistent way.
The stats models were removed from the API completely. Will wait for a use case for dealing with this later.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shortening the SponsorsList.xls file makes for slightly faster tests. Seems senseless to load 5000 every time we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
Because the name field is now used to expose workflow/sub-process information on tasks, we can't use it to store the workflow_version, so that is now just stored on the database model, which is much cleaner and removes a duplication.
INCOMPLETE = 'Incomplete in Protocol Builder',
ACTIVE = 'Active / Ready to roll',
HOLD = 'On Hold',
OPEN = 'Open - this study is in progress',
ABANDONED = 'Abandoned, it got deleted in Protocol Builder'
Moving the primary process id from the workflow model to the file model, and assuring it is updated properly. This was causing a bug that would "lose" the workflow.
Found a problem where the documentation for elements was being processed BEFORE data was loaded from a script. There still may be some issues here.
Ran into an issue with circular dependencies - handling it with a new workflow_service, and pulling computational logic out of the api_models - it was the right thing to do.
Some ugly fixes in the file_service for improving pandas output from spreadsheet processing that I need to revisit.
Now that spiff-workflow handles multi-instance, we can't have random multi-instance tasks around.
Improved tests around study deletion.
Adding id and spec_version to the workflow metadata.
Refactoring the processing of the master_spec so that it doesn't pollute the workflow database.
Adding tests to assure that the status and counts are updated on the workflow model as users make progress.
Created a Study object (separate from the StudyModel) that can be constructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.
Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN; they have no version yet and no stored data, just the possibility of being started.
The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.
Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.
Example data no longer creates users or studies, it just creates the specs.
Added a validate_workflow_specification endpoint that allows you to check if the workflow will execute from beginning to end using random data.
Minor fixes to existing bpmns to allow them to pass.
All scripts must include a "do_task_validate_only" that restricts external calls and database modifications, but performs as much logic as possible.
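The contract looks roughly like this; the method names follow the text above, but the argument list and base class are guesses:

```python
class Script:
    """Sketch of the shape every script is expected to follow (not the real base class)."""

    def do_task(self, task, *args, **kwargs):
        """Normal execution: may call external services and modify the database."""
        raise NotImplementedError

    def do_task_validate_only(self, task, *args, **kwargs):
        """Validation run: performs as much of the same logic as possible, but
        makes no external calls and no database modifications, returning
        representative data so validation can continue."""
        raise NotImplementedError
```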