Upgraded SpiffWorkflow and now use the new get_subprocess_specs.
Updated calculate_stats in the workflow processor, as the serialization had changed drastically and I needed to debug some performance issues.
Added a get_navigation method that will calculate a basic navigation list MUCH faster than using get_flat_nav_list in SpiffWorkflow's Navigation object.
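Roughly, the approach looks like the sketch below; the NavItem shape and field names are illustrative, not the exact code, and it assumes we have a SpiffWorkflow BpmnWorkflow instance handy:

```python
# Rough sketch only: walk the tasks SpiffWorkflow already knows about and build a
# flat list of lightweight entries, instead of asking SpiffWorkflow's Navigation
# object for its deeply nested tree. NavItem and its fields are illustrative.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class NavItem:
    spec_name: str               # task_spec.name from SpiffWorkflow
    description: Optional[str]   # task_spec.description, if the spec has one
    state: int                   # SpiffWorkflow's integer task state


def get_navigation(bpmn_workflow) -> List[NavItem]:
    """Build a flat navigation list directly from the workflow's task tree."""
    nav = []
    for task in bpmn_workflow.get_tasks():
        nav.append(NavItem(
            spec_name=task.task_spec.name,
            description=getattr(task.task_spec, 'description', None),
            state=task.state,
        ))
    return nav
```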
Modified a hellton of tests because we no longer have total_task and completed_task counts, or a complex nested navigation list.
Added a File class that we wrap around the FileModel so the API endpoints don't change, but File no longer holds references to versions or dates from the file_data model; we
figure those out from a clean database structure.
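In rough terms, the wrapper looks something like this sketch (the attribute names are illustrative, not the exact schema):

```python
# A minimal sketch of the wrapper idea, assuming hypothetical FileModel and
# FileDataModel objects; attribute names are illustrative, not the real schema.
class File(object):
    """API-facing view of a file, computed from the database rather than stored."""

    @classmethod
    def from_models(cls, file_model, latest_file_data):
        instance = cls()
        instance.id = file_model.id
        instance.name = file_model.name
        instance.content_type = file_model.content_type
        # Version and dates are derived from the related file_data rows on demand,
        # so File itself never carries stale references to them.
        instance.latest_version = latest_file_data.version if latest_file_data else None
        instance.last_updated = latest_file_data.date_created if latest_file_data else None
        return instance
```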
The ApprovalFile is directly related to the file_data_model, so there is no chance that a reviewer would review the incorrect version of a file.
Noticed that our FileType enum spelled "bpmn" as "bpmm"; hope this doesn't screw someone up.
Workflows are directly related to the data models that create the workflow spec they need, so the files should always be there. There are no more hashes, and thus no more hash errors where it can't find the files to rebuild the workflow.
Not much to report here, other than I broke every single test in the system at one point. So I'm super concerned about this, and will be testing it a lot before creating the pull request.
Added the following columns and relationships (a rough model sketch follows the list):
* date_created - so we know when the file was created
* renamed workflow_version to just "version", because everything has a version; this is the version of the request.
* workflow_hash - this is just a quick way to see what files and versions are associated with the request; it could be factored out.
* study - a quick relationship link to the study, so that this model is easier to use.
* workflow - ditto
* approval_files - this is a list from a new link table that links an approval to specific files and versions.
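For reference, here is a rough SQLAlchemy sketch of the shape described above; the class, table, and column names are illustrative, not the actual models:

```python
# Illustrative only: the real models and names will differ.
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, func
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class StudyModel(Base):                      # stub for illustration
    __tablename__ = 'study'
    id = Column(Integer, primary_key=True)


class WorkflowModel(Base):                   # stub for illustration
    __tablename__ = 'workflow'
    id = Column(Integer, primary_key=True)


class FileDataModel(Base):                   # stub: one row per version of a file
    __tablename__ = 'file_data'
    id = Column(Integer, primary_key=True)
    version = Column(Integer)


class ApprovalModel(Base):
    __tablename__ = 'approval'
    id = Column(Integer, primary_key=True)
    study_id = Column(Integer, ForeignKey('study.id'))
    workflow_id = Column(Integer, ForeignKey('workflow.id'))
    date_created = Column(DateTime(timezone=True), server_default=func.now())
    version = Column(Integer)            # version of the request itself
    workflow_hash = Column(String)       # quick fingerprint of files/versions; may be factored out
    study = relationship(StudyModel)
    workflow = relationship(WorkflowModel)
    approval_files = relationship('ApprovalFile', back_populates='approval')


class ApprovalFile(Base):
    """Link table tying an approval to the exact file versions that were reviewed."""
    __tablename__ = 'approval_file'
    approval_id = Column(Integer, ForeignKey('approval.id'), primary_key=True)
    file_data_id = Column(Integer, ForeignKey('file_data.id'), primary_key=True)
    approval = relationship(ApprovalModel, back_populates='approval_files')
    file_data = relationship(FileDataModel)
```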
The RequestApproval is logically sound, but still needs some additional pieces in place to be callable from a BPMN workflow diagram.
Altered the file service to distinguish changes to existing files from the addition of new files, so that versions are tracked correctly as
users modify their submission, whether they add new files or replace existing ones. Deleting files worries me, and I will need to revisit this.
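The changed-vs-new check described above boils down to something like this sketch (the model and helper names are illustrative, not the real file service):

```python
# Hedged sketch, assuming a hypothetical FileDataModel with file_model_id,
# version, md5_hash, and data columns.
import hashlib


def update_or_version_file(session, file_data_model_cls, file_model, new_binary):
    """Create a new version only when the uploaded content actually changed."""
    new_md5 = hashlib.md5(new_binary).hexdigest()
    latest = (session.query(file_data_model_cls)
              .filter(file_data_model_cls.file_model_id == file_model.id)
              .order_by(file_data_model_cls.version.desc())
              .first())
    if latest is not None and latest.md5_hash == new_md5:
        return latest                          # identical content: keep the existing version
    next_version = latest.version + 1 if latest is not None else 1
    new_data = file_data_model_cls(file_model_id=file_model.id,
                                   version=next_version,
                                   md5_hash=new_md5,
                                   data=new_binary)
    session.add(new_data)
    session.commit()
    return new_data
```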
The damn base test keeps giving me a headache, so I made changes there to see if clearing and dropping the database each time will let the tests pass more consistently.
Lots more tests around the file service to make sure it is versioning user-uploaded files correctly.
The "Test Request Approval Script" tries to find to assure the correct behavior as this is likely to be called many times repeatedly and with little knowledge of the internal system. So it should just "do the right thing".
I noticed we were saving the workflow every time we loaded it up, rather than only when we were making changes to it. Refactored this to be a little more careful.
Centralized the saving of the workflow into one location in the processor, so we can make sure we update all the details about that workflow every time we save.
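In rough terms, it looks like this sketch (the processor, serializer, and column names are illustrative, not the real code):

```python
# Hedged sketch: one save() that refreshes everything about the workflow, and is
# only called when the workflow actually changes (not on every load).
from datetime import datetime, timezone


class WorkflowProcessor(object):

    def __init__(self, session, workflow_model, bpmn_workflow, serializer):
        self.session = session
        self.workflow_model = workflow_model
        self.bpmn_workflow = bpmn_workflow
        self.serializer = serializer            # assumed object with a serialize() method

    def do_engine_steps(self):
        self.bpmn_workflow.do_engine_steps()    # this changes state, so persist it
        self.save()

    def save(self):
        """The one place the workflow is persisted, so the serialized state,
        completion status, and last-updated timestamp always move together."""
        self.workflow_model.bpmn_workflow_json = self.serializer.serialize(self.bpmn_workflow)
        self.workflow_model.completed = self.bpmn_workflow.is_completed()
        self.workflow_model.last_updated = datetime.now(timezone.utc)
        self.session.add(self.workflow_model)
        self.session.commit()
```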
The workflow service has a method that will log any task action taken in a consistent way.
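Something along these lines (the TaskEventModel and its columns are illustrative, not the real model):

```python
# Hedged sketch of consistent task-action logging.
from datetime import datetime, timezone


def log_task_action(session, task_event_model_cls, user_uid, workflow_model, spiff_task, action):
    """Record every task action (e.g. 'COMPLETE', 'ASSIGN') in one consistent shape."""
    event = task_event_model_cls(
        user_uid=user_uid,
        workflow_id=workflow_model.id,
        study_id=workflow_model.study_id,
        task_id=str(spiff_task.id),
        task_name=spiff_task.task_spec.name,
        task_state=spiff_task.state,           # SpiffWorkflow's integer task state
        action=action,
        date=datetime.now(timezone.utc),
    )
    session.add(event)
    session.commit()
```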
The stats models were removed from the API completely. Will wait for a use case for dealing with this later.
Moved the primary process id from the workflow model to the file model, and assured it is updated properly. This was causing a bug that would "lose" the workflow.