1. Avoid ever re-generating the list of scripts that can be used in a script task. This was terribly expensive, as we called eval constantly, and the list never changes once the app starts (see script.py changes and comments; sketched below).
2. Cache the DocumentStatus list in the flask session, so we calculate it at most once per API call. It's at least 0.25 seconds per call (see study_service; sketched below).
3. We called UserFileService.get_files_for_study (which runs a db query every time) once for every possible document type. Now we run the query once (see study_service, line 321; sketched below).
4. When returning a workflow, we looped through every single task in that workflow's navigation and called the expensive spiff_task_to_api_task just to figure out its proper display name. We now use a much faster and more efficient method to calculate the display name (see workflow_service, lines 680 and 799; sketched below).
5. A hellton of @timeit and sincetime() calls, which I want to leave in to help debug any slowness on production (the decorator's shape is sketched below).
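
For item 1, a minimal sketch of the caching approach, assuming a hypothetical `Script` base class whose subclasses are discovered once and then reused; the real change is in script.py:

```python
class Script:
    """Base class for everything callable from a script task."""

    _script_classes = None  # populated once; the set never changes after startup

    @classmethod
    def get_all_subclasses(cls):
        # Previously this expensive discovery (including eval calls) re-ran on
        # every request; now it runs once and the result is cached for the
        # lifetime of the app.
        if cls._script_classes is None:
            cls._script_classes = cls._collect_subclasses(cls)
        return cls._script_classes

    @staticmethod
    def _collect_subclasses(klass):
        subclasses = []
        for subclass in klass.__subclasses__():
            subclasses.append(subclass)
            subclasses.extend(Script._collect_subclasses(subclass))
        return subclasses
```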
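For item 2, the per-call caching idea, illustrated here with flask's per-request `g` object (the actual change stores the list in the session); `_compute_documents_status` is a stand-in for the expensive calculation:

```python
from flask import g


def _compute_documents_status(study_id):
    # Stand-in for the expensive (~0.25s) DocumentStatus calculation.
    return []


def get_documents_status(study_id):
    """Return the DocumentStatus list, computing it at most once per API call."""
    cache_key = f"documents_status_{study_id}"
    if not hasattr(g, cache_key):
        setattr(g, cache_key, _compute_documents_status(study_id))
    return getattr(g, cache_key)
```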
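For item 3, the shape of the fix is to fetch all files in one query and group them in memory; the `irb_doc_code` attribute is assumed here for illustration:

```python
from collections import defaultdict


def group_files_by_document_type(files):
    """Group the result of a single get_files_for_study() query by document
    type, so each document type becomes a dict lookup instead of a db query."""
    files_by_type = defaultdict(list)
    for file in files:
        files_by_type[file.irb_doc_code].append(file)
    return files_by_type
```

Callers then use `files_by_type.get(code, [])` instead of hitting the database once per type.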
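For item 4, a sketch of computing just the display name, assuming the BPMN name may contain Jinja syntax referencing task data (the real method lives in workflow_service):

```python
from jinja2 import Template


def calculate_display_name(spiff_task):
    """Work out a task's display name without building a full API task."""
    raw_name = spiff_task.task_spec.description or spiff_task.task_spec.name
    try:
        # Render any {{ ... }} references against the task's data.
        return Template(raw_name).render(spiff_task.data)
    except Exception:
        return raw_name  # fall back to the raw name if rendering fails
```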
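For item 5, the @timeit decorator has roughly this shape (a sketch, not the exact implementation):

```python
import functools
import logging
import time


def timeit(func):
    """Log how long the wrapped function takes, so slow paths show up in
    production logs."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        logging.getLogger(__name__).info(
            "%s took %.3f seconds", func.__name__, time.time() - start)
        return result
    return wrapper
```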
Removing the spreadsheet.value.column and data.value.column so we just have value.column for both.
Improving the __str__ function on the ApiError class, to make debugging a little easier.
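
A sketch of the improved __str__, with illustrative field names (the real class carries more context):

```python
class ApiError(Exception):
    """Illustrative subset of the real ApiError fields."""

    def __init__(self, code, message, task_name="", file_name="", line_number=0):
        super().__init__(message)
        self.code = code
        self.message = message
        self.task_name = task_name
        self.file_name = file_name
        self.line_number = line_number

    def __str__(self):
        # Surface every detail we have, so an error is debuggable from the
        # log line alone.
        parts = [f"ApiError: {self.code}", self.message]
        if self.task_name:
            parts.append(f"Task: {self.task_name}")
        if self.file_name:
            parts.append(f"File: {self.file_name}")
        if self.line_number:
            parts.append(f"Line: {self.line_number}")
        return " | ".join(parts)
```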
Adding a "validate_all" flask command, to help us track down any issues with current workflows in production (use this in concert with sync_with_testing).
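
The command is registered roughly like this (a sketch; `get_all_specs` and `validate_spec` stand in for the real validation plumbing):

```python
import click
from flask import Flask

app = Flask(__name__)  # stands in for the real application instance


def get_all_specs():
    # Placeholder: the app would query the workflow spec models here.
    return []


def validate_spec(spec):
    # Placeholder: the app runs a full workflow validation here.
    pass


@app.cli.command("validate_all")
def validate_all():
    """Validate every workflow spec and print any errors encountered."""
    for spec in get_all_specs():
        try:
            validate_spec(spec)
        except Exception as error:
            click.echo(f"{spec}: {error}")
```

Run it with `flask validate_all`.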
Fixed lots of tests.
Removed fact_runner.py, a very early and crufty bit of code.
Most notable is the addition of the line on which the error occurs for script tasks: we now report the line number and pass back the content of the line that failed (sketched below).
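
A sketch of how the failing line can be recovered, assuming the script runs through exec (the actual change spans this repo and SpiffWorkflow):

```python
def run_script(script_code, context):
    """Execute a script task's code, reporting the failing line on error."""
    try:
        exec(script_code, context)
    except Exception as error:
        # Walk the traceback to the frame inside the exec'd script, whose
        # pseudo-filename is "<string>".
        tb = error.__traceback__
        line_number = 0
        while tb is not None:
            if tb.tb_frame.f_code.co_filename == "<string>":
                line_number = tb.tb_lineno
            tb = tb.tb_next
        line_content = ""
        if line_number:
            line_content = script_code.splitlines()[line_number - 1]
        raise RuntimeError(
            f"Error on line {line_number}: {line_content!r} ({error})") from error
```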
The validator now returns only the first error it encounters, as it's clear that all we ever get right now is two copies of the same error.
Did a lot of work between this project and SpiffWorkflow to remove all the places where we obfuscated or dropped details as we converted between WorkflowExceptions and API exceptions.
Dropped the python-Levenshtein dependency, in favor of just rolling a simple implementation ourselves in SpiffWorkflow (sketched below).
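
The replacement is the plain dynamic-programming edit distance, along these lines (a sketch of the kind of function that replaced the dependency):

```python
def levenshtein(a, b):
    """Number of single-character edits needed to turn a into b."""
    if len(a) < len(b):
        a, b = b, a  # keep the inner row as short as possible
    previous_row = list(range(len(b) + 1))
    for i, char_a in enumerate(a, 1):
        current_row = [i]
        for j, char_b in enumerate(b, 1):
            current_row.append(min(
                previous_row[j] + 1,                        # deletion
                current_row[j - 1] + 1,                     # insertion
                previous_row[j - 1] + (char_a != char_b)))  # substitution
        previous_row = current_row
    return previous_row[-1]
```

For example, `levenshtein("kitten", "sitting")` is 3.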
Adding a DocumentService to clean up the FileService and keep Documents well separated, since this seems likely to be pulled out or separated in the future; there is now a Documents API file as well, for the same reason.
Some other minor changes just fix whitespace to ensure our code lints correctly.
I removed _create_study_workflow_approvals from the base test, as we don't use approvals like this anymore.
* The Task.title returned to the front end will now attempt to process the "display_name" property for dot-notation syntax, making it possible to use this for multi-instance tasks; it will also work in all cases where we want the title to change based on values in the data model (see the sketch after these notes).
* Fixing a bug in test_study_api where it wasn't updated when we made recent changes to the different states of a study.
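A sketch of the dot-notation lookup behind the Task.title change, with hypothetical names; it falls back to the raw expression when the path doesn't resolve:

```python
def resolve_dot_notation(expression, data):
    """Resolve a path like "investigator.label" against the task data model,
    so the title can vary per multi-instance item."""
    value = data
    for part in expression.split("."):
        if isinstance(value, dict) and part in value:
            value = value[part]
        else:
            return expression  # unresolved: leave the title as written
    return str(value)


# e.g. resolve_dot_notation("investigator.label",
#                           {"investigator": {"label": "Primary Investigator"}})
# -> "Primary Investigator"
```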