* The Task.title returned to the front end now attempts to process the "display_name" property for dot-notation syntax, making it possible to use this for multi-instance tasks,
but it will work in all cases where we want the title to change based on values in the data model.
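Roughly, the dot-notation resolution behaves like this (a sketch; the helper name and its fallback behavior are assumptions, not the actual implementation):

```python
def resolve_dot_notation(display_name, task_data):
    """Resolve a dotted path such as 'investigator.display_name' against the
    task's data dictionary, falling back to the literal string if any
    segment is missing."""
    value = task_data
    for segment in display_name.split('.'):
        if isinstance(value, dict) and segment in value:
            value = value[segment]
        else:
            return display_name  # not a data path; use it verbatim
    return str(value)

# A multi-instance task iterating over investigators might carry:
task_data = {'investigator': {'display_name': 'Dr. A. Smith'}}
print(resolve_dot_notation('investigator.display_name', task_data))  # Dr. A. Smith
```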
* Fixing a bug in test_study_api where it wasn't updated when we made recent changes to the different states of a study.
Added a File class that we wrap around the FileModel so the api endpoints don't change, but File no longer holds references to versions or dates of the file_data model; we
figure this out based on a clean database structure.
The ApprovalFile is directly related to the file_data_model - so there is no chance that a reviewer would review the incorrect version of a file.
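Roughly, the wrapper looks like this (a sketch; the from_models constructor and field names are assumptions, not the actual class):

```python
class File(object):
    """API-facing wrapper around FileModel: same shape as before for the
    endpoints, but version/date are derived from file_data, not stored."""

    @classmethod
    def from_models(cls, model, data_model):
        instance = cls()
        instance.id = model.id
        instance.name = model.name
        instance.content_type = model.content_type
        # Derived at construction time from the latest file_data record:
        instance.latest_version = data_model.version if data_model else None
        instance.last_modified = data_model.date_created if data_model else None
        return instance
```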
Noticed that our FileType enum had "bpmn" spelled "bpmm"; hope this doesn't screw someone up.
Workflows are directly related to the data_models that create the workflow spec it needs. So the files should always be there. There are no more hashes, and thus no more hash errors where it can't find the files to rebuild the workflow.
Not much to report here, other than I broke every single test in the system at one point. So I'm super concerned about this, and will be testing it a lot before creating the pull request.
From an API point of view you can do the following (and only the following):
/files?workflow_spec_id=x
* You can find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id
/files?workflow_id=x
* You can find all files associated with a workflow_id, and add a file that is directly associated with the workflow
/files?workflow_id=x&form_field_key=y
* You can find all files associated with a form element on a running workflow, and add a new file.
Note: you can add multiple files to the same form_field_key, IF they have different file names. If the same name is used, the original file is archived,
and the new file takes its place.
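For illustration, here's what those three queries look like from a client (using the requests library; the host, prefix, and ids are made up):

```python
import requests

BASE = 'http://localhost:5000/v1.0'  # assumed host and API prefix

# All files attached to a workflow specification:
r = requests.get(BASE + '/files', params={'workflow_spec_id': 'my_spec'})

# All files attached directly to a running workflow:
r = requests.get(BASE + '/files', params={'workflow_id': 12})

# Files attached to one form field on a running workflow; the
# form_field_key doubles as the irb_doc_code:
r = requests.get(BASE + '/files',
                 params={'workflow_id': 12, 'form_field_key': 'Study_Protocol_Document'})
print(r.json())  # list of file metadata
```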
The study endpoints always return a list of the file metadata associated with the study. Removed /studies-files, but there is an
endpoint called /studies/all that returns all the studies in the system, and it does include their files.
On a deeper level:
The File model no longer contains:
- study_id,
- task_id,
- form_field_key
Instead, if the file is associated with a workflow, then that is the only way it is connected to the study, and we use this relationship to find files for a study.
A file is never associated with a task_id, as these change when the workflow is reloaded.
The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
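In query terms, finding files is now a couple of joins instead of stored foreign keys. A sketch, assuming the existing FileModel/WorkflowModel mappings carry workflow_id, study_id, and irb_doc_code columns:

```python
def files_for_study(session, study_id):
    """Walk file -> workflow -> study; no study_id is stored on the file."""
    return (session.query(FileModel)
            .join(WorkflowModel, FileModel.workflow_id == WorkflowModel.id)
            .filter(WorkflowModel.study_id == study_id)
            .all())

def files_for_form_field(session, workflow_id, form_field_key):
    """The form_field_key must match the irb_doc_code, so match on that."""
    return (session.query(FileModel)
            .filter(FileModel.workflow_id == workflow_id,
                    FileModel.irb_doc_code == form_field_key)
            .all())
```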
Modified the request_approval to take a list of arguments, which works better for us... today.
UpdateStudy correctly handles validation.
WorkflowService correctly populates random values from lookup tables.
And several fixes down in SpiffWorkflow, including a big bug where only the last item in a decision table made it through.
Fixing a bug where the validation of forms did not correctly process auto-complete fields.
Fixing a bug where the approvals script and the update study script could not process dot notation correctly.
Moved populate_random_data into the WorkflowService where it makes more sense.
Added the following columns (see the sketch after this list):
* date_created - so we know when the file was created
* renamed workflow_version to just "version", because everything has a version; this is the version of the request.
* workflow_hash - this is just a quick way to see what files and versions are associated with the request, it could be factored out.
* study - a quick relationship link to the study, so that this model is easier to use.
* workflow - ditto
* approval_files - this is a list from a new link table that links an approval to specific files and versions.
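Put together, the approval side looks roughly like this (a sketch; the stand-in models and any column names not listed above are assumptions):

```python
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, func
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# Minimal stand-ins for the project's existing models:
class StudyModel(Base):
    __tablename__ = 'study'
    id = Column(Integer, primary_key=True)

class WorkflowModel(Base):
    __tablename__ = 'workflow'
    id = Column(Integer, primary_key=True)

class FileDataModel(Base):
    __tablename__ = 'file_data'
    id = Column(Integer, primary_key=True)

class ApprovalModel(Base):
    __tablename__ = 'approval'
    id = Column(Integer, primary_key=True)
    study_id = Column(Integer, ForeignKey('study.id'))
    workflow_id = Column(Integer, ForeignKey('workflow.id'))
    version = Column(Integer)           # renamed from workflow_version
    workflow_hash = Column(String)      # quick file/version summary; may be factored out
    date_created = Column(DateTime(timezone=True), server_default=func.now())
    study = relationship(StudyModel)
    workflow = relationship(WorkflowModel)
    approval_files = relationship('ApprovalFile', back_populates='approval')

class ApprovalFile(Base):
    __tablename__ = 'approval_file'
    # Link table pinning an approval to exact file versions via file_data:
    approval_id = Column(Integer, ForeignKey('approval.id'), primary_key=True)
    file_data_id = Column(Integer, ForeignKey('file_data.id'), primary_key=True)
    approval = relationship(ApprovalModel, back_populates='approval_files')
```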
The RequestApproval is logically sound, but still needs some additional pieces in place to be callable from a BPMN workflow diagram.
Altered the file service to distinguish between changes to existing files and new files, so that versions are tracked correctly as
users modify their submission - adding new files or replacing existing ones. Deleting files worries me, and I will need to revisit this.
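The update-vs-add decision boils down to something like this (a sketch; the md5 comparison and the FileDataModel field names are assumptions about how the service detects changes):

```python
import hashlib

def update_file(session, file_model, binary_data):
    """Add a new file_data version only when the content actually changed;
    re-uploading identical bytes keeps the current version."""
    md5 = hashlib.md5(binary_data).hexdigest()
    latest = (session.query(FileDataModel)
              .filter(FileDataModel.file_model_id == file_model.id)
              .order_by(FileDataModel.version.desc())
              .first())
    if latest is not None and latest.md5_hash == md5:
        return latest  # unchanged content: no new version
    next_version = latest.version + 1 if latest is not None else 1
    data_model = FileDataModel(file_model_id=file_model.id,
                               version=next_version,
                               md5_hash=md5,
                               data=binary_data)
    session.add(data_model)
    return data_model
```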
The damn base test keeps giving me a headache, so I made changes there to see if clearing and dropping the database each time will let the tests pass more consistently.
Lots more tests around the file service to make sure it is versioning user uploaded files correctly.
The "Test Request Approval Script" tries to find to assure the correct behavior as this is likely to be called many times repeatedly and with little knowledge of the internal system. So it should just "do the right thing".
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less subject to breaking through better mocks.
Allow all tests to pass even when the protocol builder mock isn't running locally.
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shortening the SponsorsList.xls file makes for slightly faster tests. Seems senseless to load 5000 rows every time we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
Found a problem where the documentation for elements was being processed BEFORE data was loaded from a script. There still may be some issues here.
Ran into an issue with circular dependencies - handling it with a new workflow_service, and pulling computational logic out of the api_models - it was the right thing to do.