from datetime import datetime

from flask import g
from sqlalchemy.exc import IntegrityError

from crc import session
from crc.api.common import ApiError, ApiErrorSchema
from crc.models.protocol_builder import ProtocolBuilderStatus
from crc.models.study import Study, StudyEvent, StudyEventType, StudyModel, StudySchema, StudyForUpdateSchema, \
    StudyStatus, StudyAssociatedSchema
from crc.services.study_service import StudyService
from crc.services.user_service import UserService
from crc.services.workflow_service import WorkflowService


def add_study(body):
    """Creates a new study (or any study-like object). The body must include a title
    and a primary_investigator_id."""
    if 'primary_investigator_id' not in body:
        raise ApiError("missing_pi", "Can't create a new study without a Primary Investigator.")
    if 'title' not in body:
        raise ApiError("missing_title", "Can't create a new study without a title.")

    study_model = StudyModel(user_uid=UserService.current_user().uid,
                             title=body['title'],
                             primary_investigator_id=body['primary_investigator_id'],
                             last_updated=datetime.utcnow(),
                             status=StudyStatus.in_progress)
    session.add(study_model)
    StudyService.add_study_update_event(study_model,
                                        status=StudyStatus.in_progress,
                                        event_type=StudyEventType.user,
                                        user_uid=g.user.uid)

    errors = StudyService._add_all_workflow_specs_to_study(study_model)
    session.commit()

    study = StudyService().get_study(study_model.id, do_status=True)
    study_data = StudySchema().dump(study)
    study_data["errors"] = ApiErrorSchema(many=True).dump(errors)
    return study_data
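# Example (hypothetical values, for illustration only): the smallest body
# add_study() accepts. Both keys are required; omitting either raises an
# ApiError ("missing_pi" / "missing_title"):
#   {"title": "Sample Study", "primary_investigator_id": "dhf8r"}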


def update_study(study_id, body):
    """Pretty limited, but allows manual modification of the study status."""
    if study_id is None:
        raise ApiError('unknown_study', 'Please provide a valid Study ID.')

    study_model = session.query(StudyModel).filter_by(id=study_id).first()
    if study_model is None:
        raise ApiError('unknown_study', 'The study "%s" is not recognized.' % study_id)

    study: Study = StudyForUpdateSchema().load(body)

    status = StudyStatus(study.status)
    study_model.last_updated = datetime.utcnow()

    if study_model.status != status:
        study_model.status = status
        StudyService.add_study_update_event(study_model, status, StudyEventType.user,
                                            user_uid=UserService.current_user().uid if UserService.has_user() else None,
                                            comment=study.comment if hasattr(study, 'comment') else '')

    if status == StudyStatus.open_for_enrollment:
        study_model.enrollment_date = study.enrollment_date

    session.add(study_model)
    session.commit()

    if status in (StudyStatus.abandoned, StudyStatus.hold):
        WorkflowService.process_workflows_for_cancels(study_id)

    # Reload the full study so the frontend gets the complete object.
    study = StudyService.get_study(study_id)
    return StudySchema().dump(study)
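# Example (hypothetical values, for illustration only): moving a study to
# "hold" via update_study() records a StudyEvent and cancels any in-flight
# workflows for that study:
#   update_study(42, {"status": "hold", "comment": "Paused by the PI"})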


def get_study(study_id, update_status=False):
    study = StudyService.get_study(study_id, do_status=update_status)
    if study is None:
        raise ApiError('unknown_study', 'The study "%s" is not recognized.' % study_id, status_code=404)
    return StudySchema().dump(study)


def get_study_associates(study_id):
    return StudyService.get_study_associates(study_id)


def delete_study(study_id):
    try:
        StudyService.delete_study(study_id)
    except IntegrityError as ie:
        session.rollback()
        message = "Failed to delete Study #%i due to an Integrity Error: %s" % (study_id, str(ie))
        raise ApiError(code="study_integrity_error", message=message)


def user_studies():
    """Returns all the studies associated with the current user."""
    user = UserService.current_user(allow_admin_impersonate=True)
    StudyService.synch_with_protocol_builder_if_enabled(user)
    studies = StudyService().get_studies_for_user(user)
    results = StudySchema(many=True).dump(studies)
    return results


def all_studies():
    """Returns all studies (regardless of user) that have submitted files."""
    studies = StudyService.get_all_studies_with_files()
    results = StudySchema(many=True).dump(studies)
    return results