Previously, it returned warnings to the front end for debugging. Ticket 733 was created to address this: fix the warnings and return them again. This change also broke the tests in test_study_status_message, so we skip them for now.
Merging in code to improve performance of calculating percent complete for a study.
Assuring we have a primary investigator for the front page (another merge).
Also: all script tasks should raise WorkflowTaskExecExceptions, NOT APIExceptions. Our scripts are executed by Spiff (not the other way around), so errors need to pass fluidly through Spiff and come back to us; THEN we can convert them to APIErrors. Otherwise we lose all kinds of good information about the error. A sketch of the flow follows.
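A minimal sketch of that flow, assuming SpiffWorkflow 1.x's `WorkflowTaskExecException` (import path and attributes may vary by version); `ApiError` and `do_task` here are hypothetical stand-ins, not this repo's actual classes:

```python
# Import path for SpiffWorkflow 1.x; may differ in other versions.
from SpiffWorkflow.exceptions import WorkflowTaskExecException

class ApiError(Exception):
    """Hypothetical stand-in for the app's API error type."""
    def __init__(self, code, message, task_name=None):
        super().__init__(message)
        self.code = code
        self.task_name = task_name

def do_task(task, study_id):
    """A script body that Spiff executes inside a script task."""
    if study_id is None:
        # Raise the Spiff exception so the error keeps its task context
        # as it travels back up through the workflow engine.
        raise WorkflowTaskExecException(task, "study_id is required")

def run_workflow(workflow):
    try:
        workflow.do_engine_steps()
    except WorkflowTaskExecException as wte:
        # Convert only at the API boundary, after the error has come
        # back out of Spiff, so nothing about the failure is lost.
        # (Assumes the exception still carries the failing task.)
        raise ApiError("task_error", str(wte),
                       task_name=wte.task.task_spec.name)
```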
1. Avoid ever re-generating the list of scripts that can be used in a script task. It was terribly expensive, as we called eval constantly, and the list never changes once the app starts. (see script.py changes and comments, and the first sketch after this list)
2. Cache the DocumentStatus list in the flask session, so we calculate it at most once per API call. It costs at least 0.25 seconds per call. (see study_service, and the second sketch after this list)
3. We called UserFileService.get_files_for_study (which runs a db query every time) for every possible document type. Now we run the query once (study_service line 321; see the third sketch after this list).
4. When returning a workflow, we looped through every single task in that workflow's navigation and called the expensive spiff_task_to_api_task just to figure out its proper display name. We now use a much faster and more efficient method to calculate the display name (see workflow_service lines 680 and 799, and the fourth sketch after this list).
5. A ton of @timeit and sincetime() calls that I want to leave in, to help debug any slowness on production (see the last sketch below).
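For item 1, a sketch of the general technique: a class-level cache built once on first use. The names (Script, get_all_subclasses) echo the description, but the bodies are illustrative, not the actual script.py code:

```python
class Script(object):
    _script_classes = None  # built once, reused for the app's lifetime

    @classmethod
    def get_all_subclasses(cls):
        # The expensive discovery (previously re-run via eval on every
        # script task) now happens only on first use; the set of
        # scripts never changes after the app starts.
        if cls._script_classes is None:
            cls._script_classes = cls._generate_all_subclasses()
        return cls._script_classes

    @classmethod
    def _generate_all_subclasses(cls):
        # Walk the subclass tree recursively (the expensive part).
        found = []
        for sub in cls.__subclasses__():
            found.append(sub)
            found.extend(sub._generate_all_subclasses())
        return found
```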
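For item 2, the description says the flask session; the sketch below uses Flask's per-request `g` object instead, which gives exactly the at-most-once-per-API-call behavior described (the session would persist across calls). `get_documents_status` is a hypothetical stand-in for the expensive computation in study_service:

```python
from flask import g

def get_documents_status(study_id):
    """Hypothetical stand-in for the ~0.25s DocumentStatus computation."""
    return []  # placeholder

def get_document_status_cached(study_id):
    # Compute the DocumentStatus list at most once per API call by
    # caching it on flask.g, which lives for a single request.
    cache = getattr(g, "_doc_status_cache", None)
    if cache is None:
        cache = g._doc_status_cache = {}
    if study_id not in cache:
        cache[study_id] = get_documents_status(study_id)
    return cache[study_id]
```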
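For item 3, a sketch of the one-query idea: fetch all of a study's files up front, then group them in memory so each document type becomes a dictionary lookup. The `irb_doc_code` attribute and the usage names are assumptions for illustration:

```python
from collections import defaultdict

def group_files_by_doc_code(files):
    # Group a study's files by document code in memory, so the
    # database is hit once instead of once per document type.
    grouped = defaultdict(list)
    for f in files:
        grouped[f.irb_doc_code].append(f)
    return grouped

# Hypothetical usage: one query up front, then in-memory lookups
# for each of the many possible document types.
# all_files = UserFileService.get_files_for_study(study_id)
# by_code = group_files_by_doc_code(all_files)
# consent_files = by_code.get("Consent", [])
```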
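For item 4, a sketch of the idea: read the display name straight off the task's spec instead of building a full API task. The helper name is hypothetical; the real versions are in workflow_service near lines 680 and 799:

```python
def get_task_display_name(spiff_task):
    # Cheap path: read the label straight off the task's spec instead
    # of calling spiff_task_to_api_task, which also evaluates forms,
    # documentation, and extensions we don't need for a nav label.
    spec = spiff_task.task_spec
    return getattr(spec, "description", None) or spec.name
```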
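For item 5, a sketch of what @timeit and sincetime() style helpers typically look like; the names match the description, but the bodies here are illustrative rather than copied from the repo:

```python
import time
from functools import wraps

def timeit(func):
    """Print how long each call took; left in to debug slowness."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print("%s took %.3fs" % (func.__name__, time.time() - start))
        return result
    return wrapper

def sincetime(label, start):
    """Print time elapsed since `start` and return a new checkpoint."""
    now = time.time()
    print("%s: %.3fs" % (label, now - start))
    return now

# Hypothetical usage between slow steps:
# last = time.time()
# ...load files...
# last = sincetime("loaded files", last)
```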
Minor fixes. There are still some weird problems with study id and user id being null that I need to track down, but the issue is sporadic and hard to pin down.