Adds optional 'value' and 'id' parameters to the lookup endpoint so you can find a specific entry in the lookup table.
Makes sure the data attribute returned on a lookup model is a dictionary, and not a string.
Fixes a previous bug that would crop up if double spaces were used when performing a search.
In order to allow proper deletion of tasks, we no longer merge data returned from the front end; we set it directly as the task_data.
When returning data to the front end, we take any previous form submission and merge it into the current task data, allowing users to keep their previous submissions.
There is now an "extract_form_data" method that does its best to calculate what form data might have changed on the front end.
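For illustration, a minimal sketch of the idea behind "extract_form_data": compare what the front end sent back against the latest task data and keep only the fields the form could actually have changed. The signature and field handling here are assumptions, not the actual implementation.

```python
# Minimal sketch of the extract_form_data idea. The signature and
# field handling are illustrative assumptions, not the real code.
def extract_form_data(latest_data: dict, form_data: dict, form_field_ids: set) -> dict:
    """Return only the form fields whose values differ from the latest data."""
    changed = {}
    for field_id in form_field_ids:
        if field_id in form_data and form_data[field_id] != latest_data.get(field_id):
            changed[field_id] = form_data[field_id]
    return changed
```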
* TaskEvents now contain the data for each event as it was when the task was completed.
* When loading a task for the front end, if the task was completed previously, we take that data and overwrite it with the latest data, allowing users to see previously entered values.
* Pulling in the Admin branch, as there are changes in that branch that are critical to seeing what is happening when we do this.
* Moved the code for converting a workflow to an API-ready data structure into the Workflow service where it belongs, and out of the API.
* Hard resets just convert to using the latest spec; they don't try to keep the data from the last task. There is a better way.
* Moving to a previous task does not attempt to keep the data from the last completed task.
* Added a function that will fix all the existing RRT data by adding the critical data into the TaskEvent model. This can be called from the flask command line tool, as sketched below.
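A rough sketch of how such a backfill can be wired into the flask command line tool. The command name, imports, and form_data attribute are assumptions for illustration, not the actual code.

```python
# Rough sketch of a flask CLI command that backfills TaskEvent records.
# The imports, command name, and form_data attribute are assumptions.
from crc import app, session  # assumed application imports
from crc.models.task_event import TaskEventModel  # assumed model location

@app.cli.command("fix-rrt-data")
def fix_rrt_data():
    """Backfill the critical task data onto existing TaskEvent records."""
    for task_event in session.query(TaskEventModel).all():
        if task_event.form_data is None:
            task_event.form_data = {}  # populate from the completed task here
    session.commit()
```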
We now cache the LDAP records, so we look in our own database for the record before calling out to LDAP for the details when given a straight-up computing id like dhf8r.
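The lookup pattern, roughly; the model and service names are assumptions about where these pieces live, not the actual code.

```python
# Sketch of the cache-first LDAP lookup: check our own database first,
# and only call out to LDAP on a miss. Names here are assumptions.
from crc import session  # assumed application import
from crc.models.ldap import LdapModel  # assumed model location
from crc.services.ldap_service import LdapService  # assumed service location

def user_info(uid: str) -> LdapModel:
    """Return the cached LDAP record, fetching and caching it on a miss."""
    record = session.query(LdapModel).filter(LdapModel.uid == uid).first()
    if record is None:
        record = LdapService.search(uid)  # assumed external LDAP call
        session.add(record)
        session.commit()
    return record
```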
Added "date_approved" to the approval model.
And moved the approver and primary investigator into real associated models to make it easier to dump.
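In SQLAlchemy terms, the model change might look something like this; the column and relationship names are illustrative assumptions, not the actual schema.

```python
# Illustrative sketch of the approval model change. Column and
# relationship names are assumptions, not the actual schema.
from sqlalchemy import Column, Date, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class LdapModel(Base):  # simplified stand-in for the cached LDAP record
    __tablename__ = 'ldap_model'
    uid = Column(String, primary_key=True)

class ApprovalModel(Base):
    __tablename__ = 'approval'
    id = Column(Integer, primary_key=True)
    date_approved = Column(Date)  # newly added field
    approver_uid = Column(String, ForeignKey('ldap_model.uid'))
    # A real associated record, rather than inline fields, is what
    # makes the approval easy to dump with a nested schema.
    approver = relationship('LdapModel')
```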
Fixed a problem with the validation that was causing it to throw incorrect errors on valid workflows. It now behaves a little more like the front end does and respects the read-only fields, but the fix was mainly about always returning all the data with each form submission.
Also, when returning error messages, we attempt to include the task data for the task that caused the error.
Also, when a file can't be deleted, we respond with an API error explaining the issue, and log the details.
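A rough sketch of what including task data in an error can look like; the ApiError shape here is an illustrative assumption, not the project's actual class.

```python
# Rough sketch of carrying task data on an API error. The ApiError
# shape is an illustrative assumption, not the project's actual class.
class ApiError(Exception):
    def __init__(self, code, message, task_data=None):
        super().__init__(message)
        self.code = code
        self.message = message
        self.task_data = task_data or {}  # surfaced in the error response

def raise_task_error(task, message):
    """Raise an error that carries the failing task's data for debugging."""
    raise ApiError('task_error', message, task_data=task.data)
```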
Another speed improvement: the data column on the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need this model frequently.
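Deferring a large column in SQLAlchemy looks roughly like this (the model below is a simplified stand-in for the real one):

```python
# Simplified stand-in showing SQLAlchemy's deferred() column loading:
# the binary payload is only fetched on first attribute access, not on
# every query that touches the model.
from sqlalchemy import Column, Integer, LargeBinary
from sqlalchemy.orm import declarative_base, deferred

Base = declarative_base()

class FileDataModel(Base):
    __tablename__ = 'file_data'
    id = Column(Integer, primary_key=True)
    data = deferred(Column(LargeBinary))  # loaded lazily, on demand
```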
Added a File class that we wrap around the FileModel so the API endpoints don't change, but File no longer holds references to versions or dates from the file_data model; we figure this out based on a clean database structure.
The ApprovalFile is directly related to the file_data_model, so there is no chance that a reviewer would review the incorrect version of a file.
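The wrapper idea, sketched: version and date come from the related file_data records rather than being stored on the file itself. The fields and attribute names here are illustrative.

```python
# Sketch of the File wrapper: version and date are derived from the
# related file_data record, not stored on the file. Names are illustrative.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class File:
    id: int
    name: str
    latest_version: int
    last_updated: datetime

    @classmethod
    def from_models(cls, file_model, data_model):
        """Build the API-facing File from the underlying db models."""
        return cls(id=file_model.id,
                   name=file_model.name,
                   latest_version=data_model.version,
                   last_updated=data_model.date_created)
```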
Noticed that our FileType enum had "bpmn" spelled as "bpmm"; I hope correcting this doesn't screw someone up.
Workflows are directly related to the data_models that create the workflow spec they need, so the files should always be there. There are no more hashes, and thus no more hash errors where it can't find the files to rebuild the workflow.
Not much to report here, other than I broke every single test in the system at one point. So I'm super concerned about this, and will be testing it a lot before creating the pull request.