20 Commits

Author SHA1 Message Date
Dan Funk
6aec15cc7c Shifting to a different model, where the TaskEvents store ONLY the form data submitted for that task.
In order to allow proper deletion of tasks, we no longer merge data returned from the front end; we set it directly as the task_data.
When returning data to the front end, we take any previous form submission and merge it into the current task data, allowing users to keep their previous submissions.
There is now an "extract_form_data" method that does its best to calculate what form data might have changed on the front end.
2020-06-19 08:22:53 -04:00
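A minimal sketch of the two data movements described above, in Python. The function names and the plain-dict data shapes are assumptions for illustration, not the project's actual API:

    def extract_form_data(latest_data, original_data):
        """Best-effort guess at which form fields the front end added or changed."""
        return {k: v for k, v in latest_data.items()
                if k not in original_data or original_data[k] != v}

    def merge_previous_submission(task_data, previous_form_data):
        """Overlay a task's previous form submission onto the current task data,
        so users returning to a task see the values they entered before."""
        merged = dict(task_data)
        merged.update(previous_form_data)
        return merged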
Dan Funk
3b57adb84c Continuing a major refactor. Some important points:
* TaskEvents now contain the data for each event as it was when the task was completed.
* When loading a task for the front end, if the task was completed previously, we take that data and overwrite it with the latest data, allowing users to see previously entered values.
* Pulling in the Admin branch, as there are changes in that branch that are critical to seeing what is happening when we do this thing.
* Moved code for converting a workflow to an API-ready data structure into the Workflow service where it belongs, and out of the API.
* Hard resets just convert to using the latest spec; they don't try to keep the data from the last task.  There is a better way.
* Moving to a previous task does not attempt to keep the data from the last completed task.
* Added a function that will fix all the existing RRT data by adding critical data into the TaskEvent model. This can be called from the flask command line tool.
2020-06-17 17:11:15 -04:00
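A sketch of how such a fix-up might hang off the flask command line tool, using Flask's standard app.cli hook. The command name, module layout, and repair rule below are assumptions:

    from myapp import app, db                 # hypothetical application module
    from myapp.models import TaskEventModel   # hypothetical model import

    @app.cli.command("fix-task-events")       # invoked as: flask fix-task-events
    def fix_task_events():
        """Backfill critical fields on existing TaskEvent rows."""
        for event in TaskEventModel.query.all():
            if event.form_data is None:       # placeholder repair rule
                event.form_data = {}
        db.session.commit()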
Carlos Lopez
a6758fd555 Removing deprecation warnings 2020-06-05 12:08:46 -06:00
Dan Funk
1f0e8741ba Run the validation twice: once completing all of the data, and a second time completing only the required fields.
Also, add a helper method to reduce boilerplate code around Workflow Exceptions.
2020-05-30 17:26:27 -04:00
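One way to picture the two-pass validation: complete the workflow once with every field populated, then again with only the required fields, so both optional-field handling and required-field enforcement get exercised. A sketch with illustrative names, not the project's actual helpers:

    def validate_workflow_spec(spec):
        """Run the spec end to end twice with invented form data."""
        for required_only in (False, True):
            data = invent_form_data(spec, required_only=required_only)  # assumed helper
            complete_all_tasks(spec, data)  # assumed helper; raises on validation failure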
Dan Funk
11413838a7 Faster lookup fields. We were parsing the spec each time to get details about how to search. We're now just grabbing the workflow id and task id and building that straight into the full-text search index for faster lookups. Should be peppy.
Another speed improvement: data in the FileDataModel is deferred and not queried until it is specifically used, as the new data structures need to use this model frequently.
2020-05-29 01:39:39 -04:00
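The deferral mentioned above maps directly onto SQLAlchemy's deferred() column option: the column is left out of the initial SELECT and loaded only when the attribute is first touched. A minimal sketch, with the column layout assumed for illustration:

    from sqlalchemy import Column, Integer, LargeBinary, String
    from sqlalchemy.orm import declarative_base, deferred

    Base = declarative_base()

    class FileDataModel(Base):                # column layout assumed, not the project's
        __tablename__ = 'file_data'
        id = Column(Integer, primary_key=True)
        md5_hash = Column(String)
        # Skipped by ordinary queries; fetched lazily on first access to .data
        data = deferred(Column(LargeBinary))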
Dan Funk
560263d1a3 Missed another test. 2020-05-27 14:45:00 -04:00
Dan Funk
229b5d5ece Forgot to add the test fixes. 2020-05-27 09:55:46 -04:00
Dan Funk
be057e8758 Adding an "UpdateStudy" task that is able to update the data on the study model, useful for setting core data points on the model, such as setting the Primary Investigator, or altering the Study Title.
Fixing a bug where the validation of forms did not correctly process auto-complete fields.
Fixing a bug where the approvals script and the update study script could not process dot notation correctly.

Moved populate_random_data into the WorkflowService where it makes more sense.
2020-05-25 15:30:06 -04:00
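A sketch of the dot-notation handling the approvals and update-study scripts needed: walking an attribute path such as "primary_investigator.display_name" down an object graph. The path and function names here are hypothetical:

    def set_by_dot_notation(target, path, value):
        """Set a nested attribute from a dotted path, e.g. 'a.b.c'."""
        *parents, leaf = path.split('.')
        for name in parents:
            target = getattr(target, name)
        setattr(target, leaf, value)

    def get_by_dot_notation(target, path):
        """Read a nested attribute from a dotted path."""
        for name in path.split('.'):
            target = getattr(target, name)
        return target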
Carlos Lopez
49eb4b3f98 Making working endpoints for approvals 2020-05-23 23:53:48 -06:00
Dan Funk
951710d762 LDAP lookup.
Refactored calls into a new lookup_service to keep things tidy.

New keys for all enum/auto-complete fields:
    PROP_OPTIONS_FILE = "spreadsheet.name"
    PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
    PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
    PROP_LDAP_LOOKUP = "ldap.lookup"
    FIELD_TYPE_AUTO_COMPLETE = "autocomplete"
2020-05-19 16:11:43 -04:00
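A sketch of how those keys might drive the lookup strategy for a field. The property accessor and the strategy classes are stand-ins for whatever the lookup_service actually does:

    PROP_OPTIONS_FILE = "spreadsheet.name"
    PROP_OPTIONS_VALUE_COLUMN = "spreadsheet.value.column"
    PROP_OPTIONS_LABEL_COL = "spreadsheet.label.column"
    PROP_LDAP_LOOKUP = "ldap.lookup"

    def lookup_for_field(field):
        props = field.properties                  # dict of extension properties (assumed)
        if props.get(PROP_LDAP_LOOKUP):
            return LdapLookup()                   # hypothetical strategy class
        if props.get(PROP_OPTIONS_FILE):
            return SpreadsheetLookup(             # hypothetical strategy class
                file_name=props[PROP_OPTIONS_FILE],
                value_column=props[PROP_OPTIONS_VALUE_COLUMN],
                label_column=props[PROP_OPTIONS_LABEL_COL])
        raise ValueError('field %s has no lookup configuration' % field.id)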
Dan Funk
2699f5c65c Refactor the stats models and assure they are correct across all tests with the workflow API.
I noticed we were saving the workflow every time we loaded it up, rather than only when we were making changes to it.  Refactored this to be a little more careful.
Centralized the saving of the workflow into one location in the processor, so we can make sure we update all the details about that workflow every time we save.
The workflow service has a method that will log any task action taken in a consistent way.
The stats models were removed from the API completely.  Will wait for a use case for dealing with this later.
2020-05-04 10:57:09 -04:00
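A sketch of what the centralized save might look like: one method that is the only place the workflow is persisted, so the bookkeeping and the task-action log can never be skipped. Every name below is an assumption:

    from datetime import datetime

    class WorkflowProcessor:
        """Sketch only: all persistence funnels through save()."""
        def __init__(self, workflow_model, db_session):
            self.workflow_model = workflow_model
            self.db_session = db_session

        def save(self, task=None, action=None):
            # Update the details every time, since this is the only save path.
            self.workflow_model.last_updated = datetime.utcnow()
            if task is not None and action is not None:
                log_task_action(task, action)   # assumed consistent logging helper
            self.db_session.add(self.workflow_model)
            self.db_session.commit()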
Dan Funk
12eb039bc9 Server isn't erroring out, but can't find the lookup table id in the database, so trying to use the in-memory model instead, to give things time to get to the database. Really unsure what is happening here. Hard to see in the database. 2020-04-24 07:01:32 -04:00
Dan Funk
3aeb7ad116 Server isn't erroring out, but can't find the lookup table id in the database, so trying to use the in-memory model instead, to give things time to get to the database. Really unsure what is happening here. Hard to see in the database. 2020-04-23 14:58:17 -04:00
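The failure mode in the two commits above, a freshly added row whose id a new query cannot yet see, usually comes down to flush/commit timing. The standard SQLAlchemy remedy is to flush so the INSERT is emitted and the id assigned, then keep working with the in-memory instance instead of re-querying. A generic sketch with hypothetical names:

    lookup_model = LookupFileModel(file_name='options.xls')  # hypothetical model
    session.add(lookup_model)
    session.flush()              # emits the INSERT now; lookup_model.id is populated
    build_index(lookup_model)    # keep using the in-memory object; no re-query needed
    session.commit()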
Dan Funk
b5b46b7c2c Better overall search results for type-ahead. Still dealing with stop words failing. 2020-04-23 12:05:08 -04:00
Dan Funk
65b29e1a9d Don't just bomb out as soon as someone types an empty string. 2020-04-23 09:44:11 -04:00
Dan Funk
7b085c9c9d Adding an API Endpoint that will return a list of LookupValues that match a given query - can be used to populate an auto-complete table. 2020-04-22 19:40:40 -04:00
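A minimal Flask sketch of an endpoint in that shape; the route, parameters, and search helper are illustrative rather than the project's actual API:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route('/workflow/<int:workflow_id>/lookup/<field_id>')
    def lookup_field_options(workflow_id, field_id):
        """Return LookupValues whose label matches ?query=..., for auto-complete."""
        query = request.args.get('query', '')
        limit = request.args.get('limit', 10, type=int)
        hits = search_lookup_values(workflow_id, field_id, query, limit)  # assumed helper
        return jsonify([{'value': h.value, 'label': h.label} for h in hits])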
Dan Funk
6de8c8b977 Create lookup tables for XLS files referenced in workflows so we can do full-text searches and populate lists on the fly quickly. 2020-04-22 15:37:02 -04:00
Dan Funk
241980f98f If you add a file to a workflow that has the exact same name as a Task Spec's ID and an extension of "md", it will use that file as the markdown content and ignore the markdown in the documentation on the task spec.
Moving the primary process id from the workflow model to the file model, and assuring it is updated properly.  This was causing a bug that would "lose" the workflow.
2020-04-17 13:30:32 -04:00
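A sketch of that naming convention: prefer an attached '<task spec id>.md' file over the spec's inline documentation. The file object shape is assumed:

    def documentation_for(task_spec, workflow_files):
        """Pick the markdown source for a task spec."""
        wanted = task_spec.name + '.md'
        for f in workflow_files:            # objects assumed to have .name and .content
            if f.name == wanted:
                return f.content            # markdown from the attached file wins
        return task_spec.documentation      # fall back to the spec's own markdown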
Dan Funk
c485ff5be6 Fixing a test. 2020-04-15 11:59:02 -04:00
Dan Funk
dc2895cb05 Allow configurators to upload xls files into a workflow for defining enumerations of values for dropdown lists in forms. Fixing lots of tests.
Found a problem where the documentation for elements was being processed BEFORE data was loaded from a script. There may still be some issues here.

Ran into an issue with circular dependencies; handled it with a new workflow_service and pulled computational logic out of the api_models. It was the right thing to do.
2020-04-15 11:13:32 -04:00
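A sketch of reading such an xls upload into dropdown options, with value and label columns like the spreadsheet.* keys used elsewhere in this history; xlrd is assumed here as the reader for legacy .xls workbooks:

    import xlrd  # classic reader for legacy .xls workbooks (assumed choice)

    def options_from_xls(path, value_col=0, label_col=1):
        """Return {value, label} dicts from the first sheet, skipping the header row."""
        sheet = xlrd.open_workbook(path).sheet_by_index(0)
        return [{'value': sheet.cell_value(row, value_col),
                 'label': sheet.cell_value(row, label_col)}
                for row in range(1, sheet.nrows)]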