Commit Graph

18 Commits

Author SHA1 Message Date
Dan Funk dba41f4759 A ludicrously stupid launch into a refactor of the way all files work in the system, at a time when I crave sleep and peace above all other things.
Added a File class that we wrap around the FileModel so the api endpoints don't change, but File no longer holds references to versions or dates of the file_data model; we
figure this out from a clean database structure.

The ApprovalFile is directly related to the file_data_model - so there is no chance that a reviewer would review the incorrect version of a file.

Noticed that our FileType enum was calling "bpmn" "bpmm"; hope this doesn't screw someone up.

Workflows are directly related to the data_models that make up the workflow spec they need, so the files should always be there.  There are no more hashes, and thus no more hash errors where the system can't find the files to rebuild the workflow.

Not much to report here, other than that I broke every single test in the system at one point.  So I'm super concerned about this, and will be testing it a lot before creating the pull request.
2020-05-28 20:03:50 -04:00
Dan Funk cd7f67ab48 A major refactor of how we search and store files, as there were a lot of confusing bits in here.
From an API point of view you can do the following (and only the following):

/files?workflow_spec_id=x
* You can find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id
/files?workflow_id=x
* You can find all files associated with a workflow_id, and add a file that is directly associated with the workflow
/files?workflow_id=x&form_field_key=y
* You can find all files associated with a form element on a running workflow, and add a new file.
   Note: you can add multiple files to the same form_field_key, IF they have different file names. If the name is the same, the original file is archived,
   and the new file takes its place.

The study endpoints always return a list of the file metadata associated with the study.  Removed /studies-files, but there is an
endpoint called

/studies/all  - returns all the studies in the system, and does include their files.
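
For illustration only, a minimal sketch of how a client might call the /files endpoints above, assuming the Python requests library, a local server at http://localhost:5000/v1.0, a multipart field named "file", and a placeholder form_field_key value (none of these specifics come from this commit):

    import requests

    BASE = "http://localhost:5000/v1.0"  # assumed base URL for a local dev server

    # All files attached to a workflow specification.
    spec_files = requests.get(BASE + "/files", params={"workflow_spec_id": "my_spec"}).json()

    # Attach a file directly to a running workflow.
    with open("consent_form.docx", "rb") as fh:
        requests.post(BASE + "/files",
                      params={"workflow_id": 42},
                      files={"file": ("consent_form.docx", fh)})

    # Files tied to a specific form field on that workflow; posting another file
    # with the same name would archive the original, per the note above.
    field_files = requests.get(BASE + "/files",
                               params={"workflow_id": 42,
                                       "form_field_key": "Study_Protocol_Document"}).json()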

On a deeper level:
 The File model no longer contains:
  - study_id,
  - task_id,
  - form_field_key

Instead, if the file is associated with a workflow, then that is the one way it is connected to the study, and we use this relationship to find files for a study.
A file is never associated with a task_id, as these change when the workflow is reloaded.
The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
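
A rough sketch of what those lookups might amount to in SQLAlchemy terms, assuming FileModel and WorkflowModel classes with workflow_id, study_id, and irb_doc_code columns (the imports and column names are illustrative guesses, not taken from this commit):

    from crc import session                        # hypothetical session import
    from crc.models.file import FileModel          # column names below are assumed
    from crc.models.workflow import WorkflowModel

    def files_for_study(study_id):
        # Files reach a study only through their workflow, never through a task_id.
        return (session.query(FileModel)
                .join(WorkflowModel, FileModel.workflow_id == WorkflowModel.id)
                .filter(WorkflowModel.study_id == study_id)
                .all())

    def files_for_form_field(workflow_id, form_field_key):
        # The form_field_key doubles as the irb_doc_code, so filter on that.
        return (session.query(FileModel)
                .filter(FileModel.workflow_id == workflow_id,
                        FileModel.irb_doc_code == form_field_key)
                .all())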
2020-05-28 08:27:26 -04:00
Dan Funk 6cd4ef64d1 Fixing add_study api endpoint, so you can actually add a new "Study" with just some basic information.
Using the LDAP service for checking user details in development mode - even if you are using the back door.
Added a new Flask function, load-example-rrt-data, that loads the rrt workflow and not the CRC workflows.
Modified the "load-example-data" in the tests to use some test data, rather than loading up all the workflows[
in CRC each time, with a parameter to load crc data if that is required - which is enabled for just a handful of tests.
(Tests run in 1/4 the time now)
2020-05-25 12:29:05 -04:00
Carlos Lopez 49eb4b3f98 Making working endpoints for approvals 2020-05-23 23:53:48 -06:00
Dan Funk da7cae51b8 Adding a new reference file that provides greater details about the investigators related to a study.
Improving the study_info script documentation to provide detailed examples of values returned based on arguments.
Making the tests a little more targeted and less prone to breaking, through better mocks.
Allow all tests to pass even when the protocol builder mock isn't running locally.
Removing the duplication of reference files in tests and static, as this seems silly to me at the moment.
2020-05-07 13:57:24 -04:00
Aaron Louie 496e5b7719 Updates all workflow specs to match staging 2020-04-27 22:54:05 -04:00
Dan Funk 241980f98f If you add a file to a workflow that has the exact same name as a Task Spec's ID, and an extension of "md", it will use that file as the markdown content, and ignore the markdown in the documentation on the task spec.
Moving the primary process id from the workflow model to the file model, and assuring it is updated properly.  This was causing a bug that would "lose" the workflow.
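
Roughly, the markdown override described above could be expressed as in the sketch below; the function and argument names are made up for illustration and are not the project's actual code:

    def documentation_for(task_spec, spec_files):
        # spec_files: an illustrative mapping of file name -> markdown text
        # for the workflow spec.
        override = spec_files.get(task_spec.name + ".md")
        if override is not None:
            return override                 # the uploaded .md file wins
        return task_spec.documentation      # fall back to the markdown on the task spec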
2020-04-17 13:30:32 -04:00
Aaron Louie faf4c0df97 Updates BPMN and DMN files 2020-04-15 10:58:13 -04:00
Dan Funk 4a916c1ee3 Created a "StudyService" and moved all complex logic around study manipulation out of the study api and into this service, as things were getting complicated. The Workflow Processor no longer creates the WorkflowModel; the study object handles that, and only passes the model into the workflow processor when it is ready to start the workflow.
Created a Study object (separate from the StudyModel) that can be constructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.

Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN; they have no version yet and no stored data, just the possibility of being started.

The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.

Removed the ability to "add a workflow to a study" and "remove a workflow from a study"; a study contains all possible workflows by definition.

Example data no longer creates users or studies, it just creates the specs.
2020-03-30 08:00:16 -04:00
Dan Funk e2c408b70d Removed all self-referential calls in the study_api. One api endpoint should never call another api endpoint. Moved the logic for updating a study to the study Model, rather than checking and setting dictionary values which will become very hard to maintain.
The protocol builder service now returns real models, not dictionaries, forcing proper validation and fail-fast behavior.
Changed the name of the "status" spec to "top_level_workflow" and removed any connection a workflow or study has with this specification.  It is only used to determine status in real time, and is not reused or tracked.
Modified the required documents script to return a dictionary and not an array, making it easier to speak to specific values in the BPMN and DMN.
Working on new ways to test the top_level_workflow in the context of updates, this is still a work in progress.
Making use of several modifications to the Spiff library that enable more complex expressions in DMN models. This is evident in the new DMN models for the top_level_workflow.
2020-03-26 12:51:53 -04:00
Dan Funk f4342fc785 It became impossible to use the Swagger UI when we started adding authentication to all of the calls. I discovered Connexion and Swagger have a default way of handling JWT authentication, and this cleans up our code quite a bit, moves the securing of endpoints into the API definition (which is quite nice), and removes a whole library dependency (I never get to do that!). I added a SWAGGER_AUTH_KEY that can be used in non-production environments to allow users to quickly authenticate from the Swagger UI. I also removed all api calls to simple little happy api services, because that is just mean and pointless. 2020-03-24 14:15:21 -04:00
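
As a hedged illustration of the kind of Connexion hook the commit above refers to, a minimal bearer-token handler might look like the sketch below, assuming PyJWT; the decode_token name, the secret, and the x-bearerInfoFunc wiring in the API definition are assumptions for the example, not details from this commit:

    import jwt  # PyJWT

    SECRET = "change-me"  # placeholder; a real app would read this from config

    def decode_token(token):
        # Referenced from the OpenAPI definition via x-bearerInfoFunc, so Connexion
        # can validate the JWT before the endpoint function ever runs.
        try:
            return jwt.decode(token, SECRET, algorithms=["HS256"])
        except jwt.PyJWTError:
            return None  # Connexion rejects the request when None is returned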
Dan Funk b35427523d Merge remote-tracking branch 'origin/master' into feature/reference_files
# Conflicts:
#	crc/models/file.py
#	crc/services/file_service.py
#	tests/data/reference/irb_documents.xlsx
#	tests/test_files_api.py
2020-03-20 11:07:55 -04:00
Dan Funk 3eb1167b8e Found a few errors in the sqlalchemy file definition that were causing failures, and had some consistency problems with the IRB Categories file name. The API was bailing out because we had restricted file types to bpmn, svg, or dmn in the connexion config file; I don't restrict this anymore, as we have plenty of checks elsewhere. Adding xlrd as a dependency - this didn't fail until a push to production. 2020-03-20 08:21:21 -04:00
Dan Funk 6e3b6c2635 Assure that files uploaded through web forms and files generated from templates can be cross-referenced to known document requirements from the protocol builder. Configurators can control this by managing an XLS spreadsheet called "irb_documents.xlsx".
Required Documents is becoming complicated, so making this its own script task, removing it from study_info.py
The file_service is now very aware of this irb_documents file, so it will always need to exist.  We seed this file
during setup, but it can be overwritten by the configurator.
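
The commit above this one adds xlrd as a dependency, so a sketch of loading the lookup might look like the following; the path and the column layout (doc code in the first column, description in the second) are assumptions for illustration:

    import xlrd  # the dependency noted in the commit above

    def load_irb_document_codes(path="crc/static/reference/irb_documents.xlsx"):
        # Build a lookup of irb_doc_code -> description from the configurator's
        # spreadsheet. The path and column positions are illustrative guesses.
        book = xlrd.open_workbook(path)
        sheet = book.sheet_by_index(0)
        codes = {}
        for row in range(1, sheet.nrows):  # skip the header row
            code, description = sheet.row_values(row)[:2]
            codes[code] = description
        return codes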
2020-03-19 17:14:20 -04:00
Dan Funk 779674ab60 Add the ability to upload and request general reference files by name. These will be used across workflows and will frequently contain lookup tables that can be referenced by various script tasks. 2020-03-13 15:03:57 -04:00
Dan Funk c5cee4761e Improve version handling of files. Consolidate more of this logic in FileService. Place the version on the actual data model, not the file model, so the file model remains the same, and we just version the data associated with it. 2020-03-04 13:40:25 -05:00
Aaron Louie 5f944af0d7 Adds CR Connect training workflow specs 2020-02-28 15:39:44 -05:00
Dan Funk 2cc6010c8d Protocol builder connections 2020-02-20 13:30:04 -05:00