import json
from datetime import datetime
from unittest.mock import patch

A major refactor of how we search and store files, as there were a lot of confusing bits in here.
From an API point of view you can do the following (and only the following):

/files?workflow_spec_id=x
* Find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id.
/files?workflow_id=x
* Find all files associated with a workflow_id, and add a file that is directly associated with the workflow.
/files?workflow_id=x&form_field_key=y
* Find all files associated with a form element on a running workflow, and add a new file.

Note: you can add multiple files to the same form_field_key IF they have different file names. If the name is the same, the original file is archived and the new file takes its place.

The study endpoints always return a list of the file metadata associated with the study. Removed /studies-files, but there is an endpoint called /studies/all that returns all the studies in the system, and does include their files.

On a deeper level, the File model no longer contains:
- study_id
- task_id
- form_field_key

Instead, if the file is associated with a workflow, that is the one way it is connected to the study, and we use this relationship to find files for a study. A file is never associated with a task_id, as these change when the workflow is reloaded. The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
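The same-name archiving rule can be sketched in isolation. This is a minimal in-memory model of the behavior described above, not the real FileService API; `add_form_field_file` and the `store` dict are hypothetical names used purely for illustration:

```python
def add_form_field_file(store, form_field_key, name, data):
    """Add a file under a form field; archive any live file with the same name.

    `store` maps form_field_key -> list of file dicts. Files with different
    names coexist; re-uploading the same name archives the original.
    """
    files = store.setdefault(form_field_key, [])
    for f in files:
        if f["name"] == name and not f["archived"]:
            f["archived"] = True  # same name: the original file is archived
    files.append({"name": name, "data": data, "archived": False})
    return store


store = {}
add_form_field_file(store, "UVACompl_PRCAppr", "a.png", b"1")
add_form_field_file(store, "UVACompl_PRCAppr", "b.png", b"2")  # different name: both stay live
add_form_field_file(store, "UVACompl_PRCAppr", "a.png", b"3")  # same name: first a.png archived
live = [f["name"] for f in store["UVACompl_PRCAppr"] if not f["archived"]]
# live is ["b.png", "a.png"]; all three uploads are still stored, one archived
```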
from tests.base_test import BaseTest

from crc import db, app
from crc.models.protocol_builder import ProtocolBuilderStatus
from crc.models.study import StudyModel
from crc.models.user import UserModel
from crc.models.workflow import WorkflowModel, WorkflowStatus, \
    WorkflowSpecCategoryModel
Refactor the document details scripts. Now there is one script; it returns data in a consistent format and has all the details required. The script is located in StudyInfo, with the argument "documents". Note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0 and additional details in a list of files within the document.

I modified the XLS file to use lower-case variable names, because it disturbed me, and we have to reference them frequently. Removed the devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.

Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format. Shortening the SponsorsList.xls file makes for slightly faster tests; it seems senseless to load 5000 rows every time we reset the data. Tried to test all of this carefully in the test_study_details_documents.py test.
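For reference, each entry in the dictionary returned for a document code looks roughly like the sketch below. The keys mirror the assertions in the tests in this file; the values shown are illustrative, not live data:

```python
# One entry of the dictionary returned by StudyInfo's "documents" argument
# (backed by StudyService.get_documents_status), keyed by irb_doc_code.
documents = {
    "UVACompl_PRCAppr": {
        "id": "6",
        "code": "UVACompl_PRCAppr",
        "display_name": "UVA Compliance / PRC Approval",
        "description": "Cancer Center's PRC Approval Form",
        "category1": "UVA Compliance",
        "category2": "PRC Approval",
        "category3": "",
        "Who Uploads?": "CRC",
        "required": True,  # required according to the protocol builder
        "count": 0,        # number of files uploaded so far
        "files": [],       # per-file details appear here once count > 0
    }
}

# Workflows can consult the same structure, e.g. to list outstanding codes:
missing = [code for code, doc in documents.items()
           if doc["required"] and doc["count"] == 0]
# missing is ["UVACompl_PRCAppr"]
```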
from crc.services.file_service import FileService
from crc.services.study_service import StudyService
from crc.services.workflow_processor import WorkflowProcessor
from example_data import ExampleDataLoader


class TestStudyService(BaseTest):
    """Largely tested via the test_study_api, and time is tight, but adding new tests here."""

    def create_user_with_study_and_workflow(self):
        # Clear it all out.
        ExampleDataLoader.clean_db()

        # Assure some basic models are in place. This is a damn mess; our database models need an
        # overhaul to make this easier - better relationship modeling is now critical.
        self.load_test_spec("top_level_workflow", master_spec=True)
        user = db.session.query(UserModel).filter(UserModel.uid == "dhf8r").first()
        if not user:
            user = UserModel(uid="dhf8r", email_address="whatever@stuff.com", display_name="Stayathome Smellalots")
            db.session.add(user)
            db.session.commit()
        else:
            for study in db.session.query(StudyModel).all():
                StudyService().delete_study(study.id)

        study = StudyModel(title="My title", protocol_builder_status=ProtocolBuilderStatus.ACTIVE, user_uid=user.uid)
        db.session.add(study)
        cat = WorkflowSpecCategoryModel(name="approvals", display_name="Approvals", display_order=0)
        db.session.add(cat)
        db.session.commit()

        self.assertIsNotNone(cat.id)
        self.load_test_spec("random_fact", category_id=cat.id)
        self.assertIsNotNone(study.id)

        workflow = WorkflowModel(workflow_spec_id="random_fact", study_id=study.id,
                                 status=WorkflowStatus.not_started, last_updated=datetime.now())
        db.session.add(workflow)
        db.session.commit()

        # Assure there is a master specification, one standard spec, and lookup tables.
        ExampleDataLoader().load_reference_documents()
        return user

    @patch('crc.services.protocol_builder.ProtocolBuilderService.get_required_docs')  # mock_docs
    def test_total_tasks_updated(self, mock_docs):
        """Assure that a user's progress is available when getting a list of studies for that user."""
        app.config['PB_ENABLED'] = True
        docs_response = self.protocol_builder_response('required_docs.json')
        mock_docs.return_value = json.loads(docs_response)

        user = self.create_user_with_study_and_workflow()

        # The load example data script should set us up a user and at least one study, one category, and one workflow.
        studies = StudyService.get_studies_for_user(user)
        self.assertEqual(1, len(studies))
        self.assertEqual(1, len(studies[0].categories))
        self.assertEqual(1, len(studies[0].categories[0].workflows))

        workflow = next(iter(studies[0].categories[0].workflows))  # Workflows is a set.

        # The workflow should not be started, and it should have 0 completed tasks and 0 total tasks.
        self.assertEqual(WorkflowStatus.not_started, workflow.status)
        self.assertEqual(0, workflow.total_tasks)
        self.assertEqual(0, workflow.completed_tasks)

        # Initialize the workflow with the workflow processor.
        workflow_model = db.session.query(WorkflowModel).filter(WorkflowModel.id == workflow.id).first()
        processor = WorkflowProcessor(workflow_model)

        # Assure the workflow is now started, and knows the total and completed tasks.
        studies = StudyService.get_studies_for_user(user)
        workflow = next(iter(studies[0].categories[0].workflows))  # Workflows is a set.
        # self.assertEqual(WorkflowStatus.user_input_required, workflow.status)
        self.assertTrue(workflow.total_tasks > 0)
        self.assertEqual(0, workflow.completed_tasks)
        self.assertIsNotNone(workflow.spec_version)

        # Complete a task.
        task = processor.next_task()
        processor.complete_task(task)
        processor.save()

        # Assure the workflow has moved on to the next task.
        studies = StudyService.get_studies_for_user(user)
        workflow = next(iter(studies[0].categories[0].workflows))  # Workflows is a set.
        self.assertEqual(1, workflow.completed_tasks)

        # Get approvals.
        approvals = StudyService.get_approvals(studies[0].id)
        self.assertGreater(len(approvals), 0)
        self.assertIsNotNone(approvals[0]['display_order'])

    @patch('crc.services.protocol_builder.ProtocolBuilderService.get_required_docs')  # mock_docs
    def test_get_required_docs(self, mock_docs):
        app.config['PB_ENABLED'] = True
        # Mock out the protocol builder.
        docs_response = self.protocol_builder_response('required_docs.json')
        mock_docs.return_value = json.loads(docs_response)

        user = self.create_user_with_study_and_workflow()
        studies = StudyService.get_studies_for_user(user)
        study = studies[0]

        study_service = StudyService()
        documents = study_service.get_documents_status(study_id=study.id)  # Mocked out; any random study id works.
        self.assertIsNotNone(documents)
        self.assertTrue("UVACompl_PRCAppr" in documents.keys())
        self.assertEqual("UVACompl_PRCAppr", documents["UVACompl_PRCAppr"]['code'])
        self.assertEqual("UVA Compliance / PRC Approval", documents["UVACompl_PRCAppr"]['display_name'])
        self.assertEqual("Cancer Center's PRC Approval Form", documents["UVACompl_PRCAppr"]['description'])
        self.assertEqual("UVA Compliance", documents["UVACompl_PRCAppr"]['category1'])
        self.assertEqual("PRC Approval", documents["UVACompl_PRCAppr"]['category2'])
        self.assertEqual("", documents["UVACompl_PRCAppr"]['category3'])
        self.assertEqual("CRC", documents["UVACompl_PRCAppr"]['Who Uploads?'])
        self.assertEqual(0, documents["UVACompl_PRCAppr"]['count'])
        self.assertEqual(True, documents["UVACompl_PRCAppr"]['required'])
        self.assertEqual('6', documents["UVACompl_PRCAppr"]['id'])

    @patch('crc.services.protocol_builder.ProtocolBuilderService.get_required_docs')  # mock_docs
    def test_get_documents_has_file_details(self, mock_docs):
        # Mock out the protocol builder.
        docs_response = self.protocol_builder_response('required_docs.json')
        mock_docs.return_value = json.loads(docs_response)

        user = self.create_user_with_study_and_workflow()

        # Add a document to the study with the correct code.
        workflow = self.create_workflow('docx')
        irb_code = "UVACompl_PRCAppr"  # The first file referenced in pb required docs.
        FileService.add_workflow_file(workflow_id=workflow.id,
                                      name="anything.png", content_type="text",
                                      binary_data=b'1234', irb_doc_code=irb_code)

        docs = StudyService().get_documents_status(workflow.study_id)
        self.assertIsNotNone(docs)
        self.assertEqual("not_started", docs["UVACompl_PRCAppr"]['status'])
        self.assertEqual(1, docs["UVACompl_PRCAppr"]['count'])
        self.assertIsNotNone(docs["UVACompl_PRCAppr"]['files'][0])
        self.assertIsNotNone(docs["UVACompl_PRCAppr"]['files'][0]['file_id'])
        self.assertEqual(workflow.id, docs["UVACompl_PRCAppr"]['files'][0]['workflow_id'])

    def test_get_all_studies(self):
        user = self.create_user_with_study_and_workflow()
        study = db.session.query(StudyModel).filter_by(user_uid=user.uid).first()
        self.assertIsNotNone(study)

        # Add two workflows to the study.
workflow1 = self.create_workflow('docx', study=study)
|
|
|
|
workflow2 = self.create_workflow('empty_workflow', study=study)
        # Add files to both workflows.
        FileService.add_workflow_file(workflow_id=workflow1.id,
                                      name="anything.png", content_type="text",
                                      binary_data=b'1234', irb_doc_code="UVACompl_PRCAppr")
        FileService.add_workflow_file(workflow_id=workflow1.id,
                                      name="anything.png", content_type="text",
                                      binary_data=b'1234', irb_doc_code="AD_Consent_Model")
        FileService.add_workflow_file(workflow_id=workflow2.id,
                                      name="anything.png", content_type="text",
                                      binary_data=b'1234', irb_doc_code="UVACompl_PRCAppr")

        studies = StudyService().get_all_studies_with_files()
        self.assertEqual(1, len(studies))
        self.assertEqual(3, len(studies[0].files))

    @patch('crc.services.protocol_builder.ProtocolBuilderService.get_investigators')  # mock_docs
    def test_get_personnel(self, mock_docs):
        self.load_example_data()

        # Mock out the protocol builder.
        docs_response = self.protocol_builder_response('investigators.json')
        mock_docs.return_value = json.loads(docs_response)

        workflow = self.create_workflow('docx')  # The workflow really doesn't matter in this case.
        investigators = StudyService().get_investigators(workflow.study_id)

        self.assertEqual(9, len(investigators))

        # dhf8r is in the ldap mock data.
        self.assertEqual("dhf8r", investigators['PI']['user_id'])
        self.assertEqual("Dan Funk", investigators['PI']['display_name'])  # Data from ldap
        self.assertEqual("Primary Investigator", investigators['PI']['label'])  # Data from xls file.
        self.assertEqual("Always", investigators['PI']['display'])  # Data from xls file.

        # asd3v is not in ldap, so an error should be returned.
        self.assertEqual("asd3v", investigators['DC']['user_id'])
        self.assertEqual("Unable to locate a user with id asd3v in LDAP", investigators['DC']['error'])

        # No value is provided for Department Chair.
        self.assertIsNone(investigators['DEPT_CH']['user_id'])