Refactor the document details scripts. There is now one script; it returns data in a consistent format and has all the details required. The script is located in StudyInfo, with the argument "documents". Note that it returns a dictionary of ALL the documents, with a field marking which ones are required according to the protocol builder. Others may become required if a workflow determines so, in which case the workflow will enforce this, and the document will have a count > 0 and additional details in a list of files within the document. I modified the XLS file to use lower-case variable names, because it disturbed me, and we have to reference them frequently. Removed the devious "as_object" argument on get_required_docs, so it always behaves like the other methods and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shortening the SponsorsList.xls file makes for slightly faster tests. It seems senseless to load 5000 rows every time we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 19:08:11 +00:00
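To make the new return format concrete, here is a minimal sketch of the dictionary shape described above. The field names ("required", "count", "files") and the document codes are illustrative assumptions, not the confirmed schema; the real codes come from the reference XLS and the logic lives in StudyService.

```python
# Illustrative sketch only: the keys below are assumptions about the
# dictionary the StudyInfo script returns for the "documents" argument.
documents = {
    "UVACompl_PRCAppr": {            # document code from the reference XLS
        "required": True,            # required per the protocol builder
        "count": 1,                  # number of files uploaded for this code
        "files": [                   # one entry per uploaded instance
            {"file_id": 42, "name": "approval.pdf"},
        ],
    },
    "Study_Protocol_Document": {
        "required": True,
        "count": 0,                  # required, but nothing uploaded yet
        "files": [],
    },
    "AD_LabManual": {
        "required": False,           # a workflow may still enforce this later
        "count": 0,
        "files": [],
    },
}

# A workflow (or a test) can then find required-but-missing documents:
missing = sorted(code for code, doc in documents.items()
                 if doc["required"] and doc["count"] == 0)
print(missing)  # ['Study_Protocol_Document']
```

Because every document appears in the dictionary regardless of upload state, consumers can distinguish "required and missing" from "optional and absent" without a second lookup.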
import json

from unittest.mock import patch

from crc import db, session
from crc.api.common import ApiError
from crc.models.file import FileDataModel, FileModel
from crc.models.protocol_builder import ProtocolBuilderRequiredDocumentSchema
from crc.models.study import StudyModel
from crc.scripts.study_info import StudyInfo
from crc.services.file_service import FileService
from crc.services.study_service import StudyService
from crc.services.workflow_processor import WorkflowProcessor

from tests.base_test import BaseTest

class TestStudyDetailsDocumentsScript(BaseTest):
    test_uid = "dhf8r"
    test_study_id = 1

    """
    1. Get a list of all documents related to the study.
    2. For this study, is this document required according to the protocol builder?
    3. For ALL uploaded documents, what is the total number of files that were uploaded, per instance of the
       document naming convention that we are implementing for the IRB?
    """

    @patch('crc.services.protocol_builder.requests.get')
    def test_validate_returns_error_if_reference_files_do_not_exist(self, mock_get):
        mock_get.return_value.ok = True
        mock_get.return_value.text = self.protocol_builder_response('required_docs.json')

        self.load_example_data()
        self.create_reference_document()
        study = session.query(StudyModel).first()
        workflow_spec_model = self.load_test_spec("two_forms")
        workflow_model = StudyService._create_workflow_model(study, workflow_spec_model)
        processor = WorkflowProcessor(workflow_model)
        task = processor.next_task()

        # Remove the reference file.
        file_model = db.session.query(FileModel). \
            filter(FileModel.is_reference == True). \
            filter(FileModel.name == FileService.DOCUMENT_LIST).first()
        if file_model:
            db.session.query(FileDataModel).filter(FileDataModel.file_model_id == file_model.id).delete()
            db.session.query(FileModel).filter(FileModel.id == file_model.id).delete()
            db.session.commit()
            db.session.flush()

        with self.assertRaises(ApiError):
            StudyInfo().do_task_validate_only(task, study.id, "documents")

    @patch('crc.services.protocol_builder.requests.get')
    def test_no_validation_error_when_correct_file_exists(self, mock_get):
        mock_get.return_value.ok = True
        mock_get.return_value.text = self.protocol_builder_response('required_docs.json')

        self.load_example_data()
        self.create_reference_document()
        study = session.query(StudyModel).first()
        workflow_spec_model = self.load_test_spec("two_forms")
        workflow_model = StudyService._create_workflow_model(study, workflow_spec_model)
        processor = WorkflowProcessor(workflow_model)
        task = processor.next_task()
        StudyInfo().do_task_validate_only(task, study.id, "documents")

    def test_load_lookup_data(self):
        self.create_reference_document()
        # Renamed from "dict" to avoid shadowing the builtin.
        lookup_data = FileService.get_reference_data(FileService.DOCUMENT_LIST, 'code', ['id'])
        self.assertIsNotNone(lookup_data)

    def get_required_docs(self):
        string_data = self.protocol_builder_response('required_docs.json')
        return ProtocolBuilderRequiredDocumentSchema(many=True).loads(string_data)