from copy import copy
from datetime import datetime

from dateutil import parser
from typing import List

import requests
from SpiffWorkflow import WorkflowException
from SpiffWorkflow.bpmn.PythonScriptEngine import Box
from SpiffWorkflow.exceptions import WorkflowTaskExecException
from SpiffWorkflow.util.metrics import timeit, firsttime, sincetime, LOG
from flask import g
from ldap3.core.exceptions import LDAPSocketOpenError

from crc import db, session, app
from crc.api.common import ApiError
from crc.models.data_store import DataStoreModel
from crc.models.email import EmailModel
from crc.models.file import FileModel, File, FileSchema
from crc.models.ldap import LdapSchema
from crc.models.protocol_builder import ProtocolBuilderCreatorStudy
from crc.models.study import StudyModel, Study, StudyStatus, Category, WorkflowMetadata, StudyEventType, StudyEvent, \
    StudyAssociated, ProgressStatus, CategoryMetadata
from crc.models.task_event import TaskEventModel
from crc.models.task_log import TaskLogModel
from crc.models.workflow import WorkflowSpecCategory, WorkflowModel, WorkflowSpecInfo, WorkflowState, \
    WorkflowStatus
from crc.services.document_service import DocumentService
from crc.services.ldap_service import LdapService
from crc.services.lookup_service import LookupService
from crc.services.protocol_builder import ProtocolBuilderService
from crc.services.user_file_service import UserFileService
from crc.services.workflow_processor import WorkflowProcessor


class StudyService(object):
    """Provides common tools for working with a Study"""

    INVESTIGATOR_LIST = "investigators.xlsx"  # A reference document containing details about what investigators to show, and when.

    # The review types 2, 3, 21 correspond to the review type names
    # `Full Committee`, `Expedited`, and `Review by Non-UVA IRB`.
    # These are considered to be the valid review types that can be shown to users.
    VALID_REVIEW_TYPES = [2, 3, 21]
    PB_MIN_DATE = parser.parse(app.config['PB_MIN_DATE'])

    def get_studies_for_user(self, user, categories, include_invalid=False):
        """Returns a list of all studies for the given user."""
        associated = session.query(StudyAssociated).filter_by(uid=user.uid, access=True).all()
        associated_studies = [x.study_id for x in associated]
        db_studies = session.query(StudyModel).filter((StudyModel.user_uid == user.uid) |
                                                      (StudyModel.id.in_(associated_studies))).all()

        studies = []
        for study_model in db_studies:
            if include_invalid or study_model.review_type in self.VALID_REVIEW_TYPES:
                studies.append(StudyService.get_study(study_model.id, categories, study_model=study_model,
                                                      process_categories=False))
        return studies

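    # Illustrative usage (an assumption about the caller, not part of this service):
    # the study API would typically call this with the current user and the full list
    # of workflow spec categories, e.g.
    #     studies = StudyService().get_studies_for_user(g.user, categories)
    # passing include_invalid=True only where studies with a review_type outside
    # VALID_REVIEW_TYPES also need to be shown.
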
    @staticmethod
    def get_all_studies_with_files():
        """Returns a list of all studies, including their file metadata."""
        db_studies = session.query(StudyModel).all()
        studies = []
        for s in db_studies:
            study = Study.from_model(s)
            study.files = UserFileService.get_files_for_study(study.id)
            studies.append(study)
        return studies

    @staticmethod
    def get_study_warnings(workflow_metas, status):
        # Grab warnings generated from the master workflow for debugging
        warnings = []
        unused_statuses = status.copy()  # A list of all the statuses that are not used.
        for wfm in workflow_metas:
            unused_statuses.pop(wfm.workflow_spec_id, None)
            # Do we have a status for this workflow?
            if wfm.workflow_spec_id not in status.keys():
                warnings.append(ApiError("missing_status",
                                         "No status information provided about workflow %s" % wfm.workflow_spec_id))
                continue
            if not isinstance(status[wfm.workflow_spec_id], dict):
                warnings.append(ApiError(code='invalid_status',
                                         message=f'Status must be a dictionary with "status" and "message" keys. '
                                                 f'Name is {wfm.workflow_spec_id}. Status is {status[wfm.workflow_spec_id]}'))
                continue
            if 'status' not in status[wfm.workflow_spec_id].keys():
                warnings.append(ApiError("missing_status_key",
                                         "Workflow '%s' is present in master workflow, but doesn't have a status" % wfm.workflow_spec_id))
                continue
            if not WorkflowState.has_value(status[wfm.workflow_spec_id]['status']):
                warnings.append(ApiError("invalid_state",
                                         "Workflow '%s' can not be set to '%s', should be one of %s" % (
                                             wfm.workflow_spec_id, status[wfm.workflow_spec_id]['status'],
                                             ",".join(WorkflowState.list())
                                         )))
                continue

        for status in unused_statuses:
            if isinstance(unused_statuses[status], dict) and 'status' in unused_statuses[status]:
                warnings.append(ApiError("unmatched_status", "The master workflow provided a status for '%s' a "
                                                             "workflow that doesn't seem to exist." %
                                         status))

        return warnings

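    # For reference, the `status` argument above is expected to look roughly like the
    # sketch below (spec ids, states, and messages are made up for illustration):
    #     {
    #         'enter_core_info': {'status': 'required', 'message': 'Core info is always required.'},
    #         'sponsor_funding_source': {'status': 'disabled', 'message': 'No funding source selected.'},
    #     }
    # i.e. a dict keyed by workflow_spec_id whose values are dicts with a 'status'
    # (one of WorkflowState's values) and a 'message'.
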
Created a "StudyService" and moved all complex logic around study manipulation out of the study api, and this service, as things were getting complicated. The Workflow Processor no longer creates the WorkflowModel, the study object handles that, and only passes the model into the workflow processor when it is ready to start the workflow.
Created a Study object (seperate from the StudyModel) that can cronstructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.
Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN, they have no version yet and no stored data, just the possiblity of being started.
The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.
Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.
Example data no longer creates users or studies, it just creates the specs.
2020-03-30 08:00:16 -04:00
|
|
|
@staticmethod
|
2022-03-18 12:59:31 -04:00
|
|
|
@timeit
|
2022-02-09 08:50:00 -05:00
|
|
|
def get_study(study_id, categories: List[WorkflowSpecCategory], study_model: StudyModel = None,
|
2022-03-18 16:55:38 -04:00
|
|
|
master_workflow_results=None, process_categories=False):
|
Created a "StudyService" and moved all complex logic around study manipulation out of the study api, and this service, as things were getting complicated. The Workflow Processor no longer creates the WorkflowModel, the study object handles that, and only passes the model into the workflow processor when it is ready to start the workflow.
Created a Study object (seperate from the StudyModel) that can cronstructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.
Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN, they have no version yet and no stored data, just the possiblity of being started.
The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.
Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.
Example data no longer creates users or studies, it just creates the specs.
2020-03-30 08:00:16 -04:00
|
|
|
"""Returns a study model that contains all the workflows organized by category.
|
2022-02-09 08:50:00 -05:00
|
|
|
Pass in the results of the master workflow spec, and the status of other workflows will be updated."""
|
2022-03-18 12:59:31 -04:00
|
|
|
last_time = firsttime()
|
Created a "StudyService" and moved all complex logic around study manipulation out of the study api, and this service, as things were getting complicated. The Workflow Processor no longer creates the WorkflowModel, the study object handles that, and only passes the model into the workflow processor when it is ready to start the workflow.
Created a Study object (seperate from the StudyModel) that can cronstructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.
Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN, they have no version yet and no stored data, just the possiblity of being started.
The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.
Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.
Example data no longer creates users or studies, it just creates the specs.
2020-03-30 08:00:16 -04:00
|
|
|
if not study_model:
|
|
|
|
study_model = session.query(StudyModel).filter_by(id=study_id).first()
|
|
|
|
study = Study.from_model(study_model)
|
2022-03-18 12:59:31 -04:00
|
|
|
last_time = sincetime("from model", last_time)
|
2021-02-10 11:58:19 -05:00
|
|
|
study.create_user_display = LdapService.user_info(study.user_uid).display_name
|
2022-03-18 12:59:31 -04:00
|
|
|
last_time = sincetime("user", last_time)
|
2021-08-10 16:16:08 -04:00
|
|
|
last_event: TaskEventModel = session.query(TaskEventModel) \
|
|
|
|
.filter_by(study_id=study_id, action='COMPLETE') \
|
2021-02-10 11:58:19 -05:00
|
|
|
.order_by(TaskEventModel.date.desc()).first()
|
2021-02-16 11:10:40 -05:00
|
|
|
if last_event is None:
|
|
|
|
study.last_activity_user = 'Not Started'
|
|
|
|
study.last_activity_date = ""
|
|
|
|
else:
|
|
|
|
study.last_activity_user = LdapService.user_info(last_event.user_uid).display_name
|
|
|
|
study.last_activity_date = last_event.date
|
2022-03-18 12:59:31 -04:00
|
|
|
last_time = sincetime("task_events", last_time)
|
2022-02-09 08:50:00 -05:00
|
|
|
study.categories = categories
|
2022-02-02 12:59:56 -05:00
|
|
|
files = UserFileService.get_files_for_study(study.id)
|
2022-04-20 11:16:07 -04:00
|
|
|
files = (File.from_file_model(model, DocumentService.get_dictionary()) for model in files)
|
2020-05-31 21:15:40 -04:00
|
|
|
study.files = list(files)
|
2022-03-18 12:59:31 -04:00
|
|
|
last_time = sincetime("files", last_time)
|
2022-05-31 16:53:15 -04:00
|
|
|
if process_categories and master_workflow_results is not None:
|
2022-03-18 16:03:50 -04:00
|
|
|
if study.status != StudyStatus.abandoned:
|
2022-05-16 12:58:40 -04:00
|
|
|
workflow_metas = []
|
2022-03-18 16:03:50 -04:00
|
|
|
for category in study.categories:
|
2022-05-16 12:58:40 -04:00
|
|
|
cat_workflow_metas = StudyService._get_workflow_metas(study_id, category)
|
|
|
|
workflow_metas.extend(cat_workflow_metas)
|
2022-03-18 16:03:50 -04:00
|
|
|
category_meta = []
|
|
|
|
if master_workflow_results:
|
|
|
|
category_meta = StudyService._update_status_of_category_meta(master_workflow_results, category)
|
2022-05-16 12:58:40 -04:00
|
|
|
category.workflows = cat_workflow_metas
|
2022-03-18 16:03:50 -04:00
|
|
|
category.meta = category_meta
|
2022-05-16 12:58:40 -04:00
|
|
|
study.warnings = StudyService.get_study_warnings(workflow_metas, master_workflow_results)
|
|
|
|
|
2022-03-18 16:03:50 -04:00
|
|
|
last_time = sincetime("categories", last_time)
|
2022-03-18 11:03:06 -04:00
|
|
|
|
|
|
|
if study.primary_investigator is None:
|
|
|
|
associates = StudyService().get_study_associates(study.id)
|
|
|
|
for associate in associates:
|
|
|
|
if associate.role == "Primary Investigator":
|
|
|
|
study.primary_investigator = associate.ldap_info.display_name
|
2022-03-18 15:27:45 -04:00
|
|
|
|
2022-03-17 17:20:42 -04:00
|
|
|
# Calculate study progress and return it as a integer out of a hundred
|
2022-03-18 14:42:03 -04:00
|
|
|
all_workflows = db.session.query(WorkflowModel).\
|
|
|
|
filter(WorkflowModel.study_id == study.id).\
|
|
|
|
count()
|
|
|
|
complete_workflows = db.session.query(WorkflowModel).\
|
|
|
|
filter(WorkflowModel.study_id == study.id).\
|
|
|
|
filter(WorkflowModel.status == WorkflowStatus.complete).\
|
|
|
|
count()
|
|
|
|
if all_workflows > 0:
|
|
|
|
study.progress = int((complete_workflows/all_workflows)*100)
|
2022-03-18 15:27:45 -04:00
|
|
|
|
Created a "StudyService" and moved all complex logic around study manipulation out of the study api, and this service, as things were getting complicated. The Workflow Processor no longer creates the WorkflowModel, the study object handles that, and only passes the model into the workflow processor when it is ready to start the workflow.
Created a Study object (seperate from the StudyModel) that can cronstructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.
Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN, they have no version yet and no stored data, just the possiblity of being started.
The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.
Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.
Example data no longer creates users or studies, it just creates the specs.
2020-03-30 08:00:16 -04:00
|
|
|
return study
|
|
|
|
|
2022-02-09 08:50:00 -05:00
|
|
|
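    # A quick worked example of the progress math above (hypothetical numbers):
    # with 12 workflows for the study and 3 of them complete, progress is
    # int((3 / 12) * 100) == 25. When the study has no workflows the division is
    # skipped entirely, so `study.progress` keeps whatever default the Study carries.
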
    @staticmethod
    def _get_workflow_metas(study_id, category):
        # Add in the Workflows for each category
        workflow_metas = []
        for spec in category.specs:
            workflow_models = db.session.query(WorkflowModel).\
                filter(WorkflowModel.study_id == study_id).\
                filter(WorkflowModel.workflow_spec_id == spec.id).\
                all()
            for workflow in workflow_models:
                workflow_metas.append(WorkflowMetadata.from_workflow(workflow, spec))
        return workflow_metas

    @staticmethod
    def get_study_associate(study_id=None, uid=None):
        """
        Gets details on how one uid is related to a study; returns a StudyAssociated model.
        """
        study = db.session.query(StudyModel).filter(StudyModel.id == study_id).first()

        if study is None:
            raise ApiError('study_not_found', 'No study found with id = %d' % study_id)

        if uid is None:
            raise ApiError('uid not specified', 'A valid uva uid is required for this function')

        if uid == study.user_uid:
            return StudyAssociated(uid=uid, role='owner', send_email=True, access=True)

        people = db.session.query(StudyAssociated).filter((StudyAssociated.study_id == study_id) &
                                                          (StudyAssociated.uid == uid)).first()
        if people:
            return people
        else:
            raise ApiError('uid_not_associated_with_study', "user id %s was not associated with study number %d" % (uid,
                                                                                                                     study_id))

    @staticmethod
    def get_study_associates(study_id):
        """
        Gets all associated people for a study from the database.
        """
        study = db.session.query(StudyModel).filter(StudyModel.id == study_id).first()

        if study is None:
            raise ApiError('study_not_found', 'No study found with id = %d' % study_id)

        people = db.session.query(StudyAssociated).filter(StudyAssociated.study_id == study_id).all()
        ldap_info = LdapService.user_info(study.user_uid)
        owner = StudyAssociated(uid=study.user_uid, role='owner', send_email=True, access=True,
                                ldap_info=ldap_info)
        people.append(owner)
        return people

    @staticmethod
    def update_study_associates(study_id, associates):
        """
        Updates the list of associates in the database for a study_id, given a list
        of dicts describing the associates.
        """
        if study_id is None:
            raise ApiError('study_id not specified', "This function requires the study_id parameter")

        for person in associates:
            if not LdapService.user_exists(person.get('uid', 'impossible_uid')):
                if person.get('uid', 'impossible_uid') == 'impossible_uid':
                    raise ApiError('associate with no uid', "One of the associates passed as a parameter doesn't have "
                                                            "a uid specified")
                raise ApiError('trying_to_grant_access_to_user_not_found_in_ldap', "You are trying to grant access to "
                                                                                   "%s, but that user was not found in "
                                                                                   "ldap "
                                                                                   "- please check to ensure it is a "
                                                                                   "valid uva uid" % person.get('uid'))

        study = db.session.query(StudyModel).filter(StudyModel.id == study_id).first()
        if study is None:
            raise ApiError('study_id not found', "A study with id# %d was not found" % study_id)

        db.session.query(StudyAssociated).filter(StudyAssociated.study_id == study_id).delete()
        for person in associates:
            newAssociate = StudyAssociated()
            newAssociate.study_id = study_id
            newAssociate.uid = person['uid']
            newAssociate.role = person.get('role', None)
            newAssociate.send_email = person.get('send_email', False)
            newAssociate.access = person.get('access', False)
            session.add(newAssociate)
        session.commit()

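    # The `associates` argument above is a list of plain dicts. A minimal sketch of one
    # entry, based on the keys this method reads (the values are illustrative only):
    #     {'uid': 'abc2d', 'role': 'Primary Investigator', 'send_email': True, 'access': True}
    # 'role', 'send_email', and 'access' are optional and default to None, False, and False.
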
    @staticmethod
    def update_study_associate(study_id=None, uid=None, role="", send_email=False, access=False):
        if study_id is None:
            raise ApiError('study_id not specified', "This function requires the study_id parameter")
        if uid is None:
            raise ApiError('uid not specified', "This function requires a uva uid parameter")

        if not LdapService.user_exists(uid):
            raise ApiError('trying_to_grant_access_to_user_not_found_in_ldap', "You are trying to grant access to "
                                                                               "%s but they were not found in ldap "
                                                                               "- please check to ensure it is a "
                                                                               "valid uva uid" % uid)
        study = db.session.query(StudyModel).filter(StudyModel.id == study_id).first()
        if study is None:
            raise ApiError('study_id not found', "A study with id# %d was not found" % study_id)

        assoc = db.session.query(StudyAssociated).filter((StudyAssociated.study_id == study_id) &
                                                          (StudyAssociated.uid == uid) &
                                                          (StudyAssociated.role == role)).first()
        if not assoc:
            assoc = StudyAssociated()

        assoc.study_id = study_id
        assoc.uid = uid
        assoc.role = role
        assoc.send_email = send_email
        assoc.access = access
        session.add(assoc)
        session.commit()
        return True

    @staticmethod
    def delete_study(study_id):
        session.query(TaskEventModel).filter_by(study_id=study_id).delete()
        session.query(TaskLogModel).filter_by(study_id=study_id).delete()
        session.query(StudyAssociated).filter_by(study_id=study_id).delete()
        session.query(EmailModel).filter_by(study_id=study_id).delete()
        session.query(StudyEvent).filter_by(study_id=study_id).delete()
        session.query(DataStoreModel).filter_by(study_id=study_id).delete()
        for workflow in session.query(WorkflowModel).filter_by(study_id=study_id):
            StudyService.delete_workflow(workflow.id)
        study = session.query(StudyModel).filter_by(id=study_id).first()
        session.delete(study)
        session.commit()

    @staticmethod
    def delete_workflow(workflow_id):
        workflow = session.query(WorkflowModel).get(workflow_id)
        if not workflow:
            return

        session.query(TaskEventModel).filter_by(workflow_id=workflow.id).delete()
        files = session.query(FileModel).filter_by(workflow_id=workflow_id).all()
        for file in files:
            session.query(DataStoreModel).filter(DataStoreModel.file_id == file.id).delete()
            session.delete(file)

        session.delete(workflow)
        session.commit()

    @classmethod
    def get_documents_status(cls, study_id, force=False):
        """Returns a list of documents related to the study, and any file information
        that is available. This is a fairly expensive operation, so we cache the results
        in Flask's g. Each fresh api request will get an up-to-date list, but we won't
        re-create it several times within the same request."""
        if 'doc_statuses' not in g:
            g.doc_statuses = {}
        if study_id not in g.doc_statuses or force:
            g.doc_statuses[study_id] = StudyService.__get_documents_status(study_id)
        return g.doc_statuses[study_id]

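    # Cache behaviour sketch (assuming a Flask request context is active): the first call
    # in a request computes the result and stores it in g.doc_statuses[study_id]; later
    # calls in the same request reuse it, unless the caller forces a refresh, e.g.
    #     StudyService.get_documents_status(study_id, force=True)  # recompute and refresh the cache
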
@staticmethod
|
2022-02-24 14:25:42 -05:00
|
|
|
def __get_documents_status(study_id):
|
2020-05-06 11:25:50 -04:00
|
|
|
"""Returns a list of documents related to the study, and any file information
|
Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shorting up the SponsorsList.xls file makes for slightly faster tests. seems senseless to load 5000 everytime we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
|
|
|
that is available.."""
|
2020-04-23 19:25:01 -04:00
|
|
|
|
2020-05-22 14:37:49 -04:00
|
|
|
# Get PB required docs, if Protocol Builder Service is enabled.
|
2020-06-15 11:27:28 -04:00
|
|
|
if ProtocolBuilderService.is_enabled() and study_id is not None:
|
2020-05-22 14:37:49 -04:00
|
|
|
try:
|
|
|
|
pb_docs = ProtocolBuilderService.get_required_docs(study_id=study_id)
|
|
|
|
except requests.exceptions.ConnectionError as ce:
|
2020-07-02 16:10:33 -06:00
|
|
|
app.logger.error(f'Failed to connect to the Protocol Builder - {str(ce)}', exc_info=True)
|
2020-05-22 14:37:49 -04:00
|
|
|
pb_docs = []
|
|
|
|
else:
|
2020-05-11 17:04:05 -04:00
|
|
|
pb_docs = []
|
2020-05-22 14:37:49 -04:00
|
|
|
# Loop through all known document types, get the counts for those files,
|
|
|
|
# and use pb_docs to mark those as required.
|
2021-07-06 13:10:20 -04:00
|
|
|
doc_dictionary = DocumentService.get_dictionary()
|
2022-02-24 14:25:42 -05:00
|
|
|
|
Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shorting up the SponsorsList.xls file makes for slightly faster tests. seems senseless to load 5000 everytime we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
|
|
|
documents = {}
|
2022-02-24 14:25:42 -05:00
|
|
|
study_files = UserFileService.get_files_for_study(study_id=study_id)
|
|
|
|
|
Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shorting up the SponsorsList.xls file makes for slightly faster tests. seems senseless to load 5000 everytime we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
|
|
|
for code, doc in doc_dictionary.items():
|
2020-04-23 23:32:20 -04:00
|
|
|
|
2021-07-06 13:10:20 -04:00
|
|
|
doc['required'] = False
|
2021-11-05 09:59:14 -04:00
|
|
|
if ProtocolBuilderService.is_enabled() and doc['id'] != '':
|
2022-02-02 12:59:56 -05:00
|
|
|
pb_data = next(
|
|
|
|
(item for item in pb_docs['AUXDOCS'] if int(item['SS_AUXILIARY_DOC_TYPE_ID']) == int(doc['id'])),
|
|
|
|
None)
|
2020-05-22 14:37:49 -04:00
|
|
|
if pb_data:
|
|
|
|
doc['required'] = True
|
|
|
|
|
2020-04-23 19:25:01 -04:00
|
|
|
doc['study_id'] = study_id
|
|
|
|
doc['code'] = code
|
2020-04-23 23:32:20 -04:00
|
|
|
|
2022-02-24 14:25:42 -05:00
|
|
|
|
Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shorting up the SponsorsList.xls file makes for slightly faster tests. seems senseless to load 5000 everytime we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
|
|
|
# Make a display name out of categories
|
|
|
|
name_list = []
|
|
|
|
for cat_key in ['category1', 'category2', 'category3']:
|
2021-07-06 13:10:20 -04:00
|
|
|
if doc[cat_key] not in ['', 'NULL', None]:
|
Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shorting up the SponsorsList.xls file makes for slightly faster tests. seems senseless to load 5000 everytime we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
|
|
|
name_list.append(doc[cat_key])
|
|
|
|
doc['display_name'] = ' / '.join(name_list)
|
2020-04-23 19:25:01 -04:00
|
|
|
|
2022-02-24 14:25:42 -05:00
|
|
|
|
2020-04-23 19:25:01 -04:00
|
|
|
# For each file, get associated workflow status
|
2022-02-24 14:25:42 -05:00
|
|
|
doc_files = list(filter(lambda f: f.irb_doc_code == code, study_files))
|
|
|
|
# doc_files = UserFileService.get_files_for_study(study_id=study_id, irb_doc_code=code)
|
Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shorting up the SponsorsList.xls file makes for slightly faster tests. seems senseless to load 5000 everytime we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
|
|
|
doc['count'] = len(doc_files)
|
|
|
|
doc['files'] = []
|
2020-04-23 19:25:01 -04:00
|
|
|
|
2022-02-24 14:25:42 -05:00
|
|
|
|
2021-09-21 14:36:57 -04:00
|
|
|
for file_model in doc_files:
|
2022-04-20 11:16:07 -04:00
|
|
|
file = File.from_file_model(file_model, [])
|
2021-09-21 14:36:57 -04:00
|
|
|
file_data = FileSchema().dump(file)
|
|
|
|
del file_data['document']
|
2021-05-14 15:52:25 -04:00
|
|
|
doc['files'].append(Box(file_data))
|
Refactor the document details scripts. Now there is one script, it returns data in a consistent format, and has all the details required. The script is located in StudyInfo, with the argument documents. Make note that it returns a dictionary of ALL the documents, with a field to mark which ones are required according to the protocol builder. Others may become required if a workflow determines such, in which case the workflow will enforce this, and the document will have a count > 0, and additional details in a list of files within the document. I modified the XLS file to use lower case variable names, because it disturbed me, and we have to reference them frequently. Removed devious "as_object" variable on get_required_docs, so it behaves like the other methods all the time, and returns a dictionary. All the core business logic for finding the documents list now resides in the StudyService.
Because this changes the endpoint for all existing document details, I've modified all the test and static bpmn files to use the new format.
Shorting up the SponsorsList.xls file makes for slightly faster tests. seems senseless to load 5000 everytime we reset the data.
Tried to test all of this carefully in the test_study_details_documents.py test.
2020-04-29 15:08:11 -04:00
|
|
|
# update the document status to match the status of the workflow it is in.
|
A major refactor of how we search and store files, as there was a lot of confusing bits in here.
From an API point of view you can do the following (and only the following)
/files?workflow_spec_id=x
* You can find all files associated with a workflow_spec_id, and add a file with a workflow_spec_id
/files?workflow_id=x
* You can find all files associated with a workflow_id, and add a file that is directly associated with the workflow
/files?workflow_id=x&form_field_key=y
* You can find all files associated with a form element on a running workflow, and add a new file.
Note: you can add multiple files to the same form_field_key, IF they have different file names. If the same name, the original file is archived,
and the new file takes its place.
The study endpoints always return a list of the file metadata associated with the study. Removed /studies-files, but there is an
endpoint called
/studies/all - that returns all the studies in the system, and does include their files.
On a deeper level:
The File model no longer contains:
- study_id,
- task_id,
- form_field_key
Instead, if the file is associated with workflow - then that is the one way it is connected to the study, and we use this relationship to find files for a study.
A file is never associated with a task_id, as these change when the workflow is reloaded.
The form_field_key must match the irb_doc_code, so when requesting files for a form field, we just look up the irb_doc_code.
2020-05-28 08:27:26 -04:00
|
|
|
if 'status' not in doc or doc['status'] is None:
|
2021-11-29 17:48:05 -05:00
|
|
|
status = session.query(WorkflowModel.status).filter_by(id=file.workflow_id).scalar()
|
|
|
|
doc['status'] = status.value
|
2020-04-23 19:25:01 -04:00
|
|
|
|
documents[code] = doc
return Box(documents)
@staticmethod
def get_investigator_dictionary():
    # Build a dictionary of investigator types from the reference file, keyed by type code.
    lookup_model = LookupService.get_lookup_model_for_reference(StudyService.INVESTIGATOR_LIST, 'code', 'label')
    doc_dict = {}
    for lookup_data in lookup_model.dependencies:
        doc_dict[lookup_data.value] = lookup_data.data
    return doc_dict
@staticmethod
def get_investigators(study_id, all=False):
    """Convert array of investigators from protocol builder into a dictionary keyed on the type. """

    # Loop through all known investigator types as set in the reference file
    inv_dictionary = StudyService.get_investigator_dictionary()

    # Get the investigators for this study from Protocol Builder
    pb_investigators = ProtocolBuilderService.get_investigators(study_id=study_id)

    # It is possible for the same type to show up more than once in some circumstances;
    # in those cases, append a counter to the name.
    investigators = {}
    for i_type in inv_dictionary:
        pb_data_entries = list(item for item in pb_investigators if item['INVESTIGATORTYPE'] == i_type)
        entry_count = 0
        investigators[i_type] = copy(inv_dictionary[i_type])
        investigators[i_type]['user_id'] = None
        for pb_data in pb_data_entries:
            entry_count += 1
            if entry_count == 1:
                t = i_type
            else:
                t = i_type + "_" + str(entry_count)
            investigators[t] = copy(inv_dictionary[i_type])
            investigators[t]['user_id'] = pb_data["NETBADGEID"]
            investigators[t].update(StudyService.get_ldap_dict_if_available(pb_data["NETBADGEID"]))
    if not all:
        investigators = dict(filter(lambda elem: elem[1]['user_id'] is not None, investigators.items()))
    return investigators
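For illustration only, the dictionary built above might end up looking something like this hypothetical example; the type codes, uids, and LDAP-derived field names are assumptions, not values taken from the reference file:

# Hypothetical return value of get_investigators(study_id) with all=False:
# only entries that matched a Protocol Builder investigator (and so have a user_id) remain.
example_investigators = {
    'PI': {'label': 'Primary Investigator', 'user_id': 'abc1d',
           'display_name': 'Alice B. Cook', 'email_address': 'abc1d@example.edu'},
    'DC': {'label': 'Department Contact', 'user_id': 'xyz2e',
           'display_name': 'Xavier Y. Zee', 'email_address': 'xyz2e@example.edu'},
}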
@staticmethod
def get_ldap_dict_if_available(user_id):
    try:
        return LdapSchema().dump(LdapService().user_info(user_id))
    except ApiError as ae:
        app.logger.info(str(ae))
        return {"error": str(ae)}
    except LDAPSocketOpenError:
        app.logger.info("Failed to connect to LDAP Server.")
        return {}
@staticmethod
@timeit
def synch_with_protocol_builder_if_enabled(user, specs):
    """Assures that the studies we have locally for the given user are
    in sync with the studies available in protocol builder. """

    if ProtocolBuilderService.is_enabled():

        app.logger.info("The Protocol Builder is enabled. app.config['PB_ENABLED'] = " +
                        str(app.config['PB_ENABLED']))

        # Get studies matching this user from Protocol Builder
        pb_studies: List[ProtocolBuilderCreatorStudy] = ProtocolBuilderService.get_studies(user.uid)

        # Get studies from the database
        db_studies = session.query(StudyModel).filter_by(user_uid=user.uid).all()

        # Update all studies from the protocol builder, create new studies as needed.
        # Further assures that every active study (that does exist in the protocol builder)
        # has a reference to every available workflow (though some may not have started yet)
        for pb_study in pb_studies:
            try:
                if pb_study.DATELASTMODIFIED:
                    last_modified = parser.parse(pb_study.DATELASTMODIFIED)
                else:
                    last_modified = parser.parse(pb_study.DATECREATED)
                if last_modified.date() < StudyService.PB_MIN_DATE.date():
                    continue
            except Exception as e:
                # Last modified is null or undefined. Don't import it.
                continue

            new_status = None
            new_progress_status = None
            db_study = session.query(StudyModel).filter(StudyModel.id == pb_study.STUDYID).first()
            # db_study = next((s for s in db_studies if s.id == pb_study.STUDYID), None)

            add_study = False
            if not db_study:
                db_study = StudyModel(id=pb_study.STUDYID)
                db_study.status = None  # Force a new sa
                new_status = StudyStatus.in_progress
                new_progress_status = ProgressStatus.in_progress

                # we use add_study below to determine whether we add the study to the session
                add_study = True
                db_studies.append(db_study)

            db_study.update_from_protocol_builder(pb_study, user.uid)
            StudyService.add_all_workflow_specs_to_study(db_study, specs)

            # If there is a new automatic status change and there isn't a manual change in place, record it.
            if new_status and db_study.status != StudyStatus.hold:
                db_study.status = new_status
                # make sure status is `in_progress` before processing the new automatic progress_status.
                if new_progress_status and db_study.status == StudyStatus.in_progress:
                    db_study.progress_status = new_progress_status
                StudyService.add_study_update_event(db_study,
                                                    status=new_status,
                                                    event_type=StudyEventType.automatic)

            # we moved session.add here so that it comes after we update the study;
            # we only add if it doesn't already exist in the DB
            if add_study:
                session.add(db_study)

        # Mark studies as inactive that are no longer in Protocol Builder
        for study in db_studies:
            pb_study = next((pbs for pbs in pb_studies if pbs.STUDYID == study.id), None)
            if not pb_study and study.status != StudyStatus.abandoned:
                study.status = StudyStatus.abandoned
                StudyService.add_study_update_event(study,
                                                    status=StudyStatus.abandoned,
                                                    event_type=StudyEventType.automatic)

        db.session.commit()
@staticmethod
def add_study_update_event(study, status, event_type, user_uid=None, comment=''):
    study_event = StudyEvent(study=study,
                             status=status,
                             event_type=event_type,
                             user_uid=user_uid,
                             comment=comment)
    db.session.add(study_event)
    db.session.commit()
@staticmethod
def _update_status_of_category_meta(status, cat):
    # Build a CategoryMetadata object from the entry for this category in the status dictionary.
    cat_meta = CategoryMetadata()
    if status.get(cat.id):
        cat_meta.id = cat.id
        cat_meta.state = WorkflowState[status.get(cat.id)['status']].value
        if 'message' in status.get(cat.id):
            cat_meta.message = status.get(cat.id)['message']
    return cat_meta
@staticmethod
def add_all_workflow_specs_to_study(study_model: StudyModel, specs: List[WorkflowSpecInfo]):
    existing_models = session.query(WorkflowModel).filter(WorkflowModel.study == study_model).all()
    existing_spec_ids = list(map(lambda x: x.workflow_spec_id, existing_models))
    errors = []
    for workflow_spec in specs:
        if workflow_spec.id in existing_spec_ids:
            continue
        try:
            StudyService._create_workflow_model(study_model, workflow_spec)
        except WorkflowException as we:
            errors.append(ApiError.from_workflow_exception("workflow_startup_exception", str(we), we))
    return errors
@staticmethod
def _create_workflow_model(study: StudyModel, spec):
    workflow_model = WorkflowModel(status=WorkflowStatus.not_started,
                                   study=study,
                                   user_id=None,
                                   workflow_spec_id=spec.id,
                                   last_updated=datetime.utcnow())
    session.add(workflow_model)
    session.commit()
    return workflow_model