Created a "StudyService" and moved all complex logic around study manipulation out of the study api, and this service, as things were getting complicated. The Workflow Processor no longer creates the WorkflowModel, the study object handles that, and only passes the model into the workflow processor when it is ready to start the workflow.
Created a Study object (seperate from the StudyModel) that can cronstructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.
Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN, they have no version yet and no stored data, just the possiblity of being started.
The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.
Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.
Example data no longer creates users or studies, it just creates the specs.
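# A minimal sketch (not part of this module) of how the request-time Study object is meant
# to flow: build it from a stored StudyModel, attach category/workflow metadata, and
# serialize it.  The session query and the StudyService.get_categories() call are
# assumptions about the surrounding service layer, not code defined in this file.
#
#     study_model = db.session.query(StudyModel).filter_by(id=study_id).first()
#     study = Study.from_model(study_model)
#     study.categories = StudyService.get_categories()   # hypothetical service call
#     return StudySchema().dump(study)
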
import marshmallow
from marshmallow import INCLUDE, fields
from marshmallow_enum import EnumField
from sqlalchemy import func
Created a "StudyService" and moved all complex logic around study manipulation out of the study api, and this service, as things were getting complicated. The Workflow Processor no longer creates the WorkflowModel, the study object handles that, and only passes the model into the workflow processor when it is ready to start the workflow.
Created a Study object (seperate from the StudyModel) that can cronstructed on request, and contains a different data structure than we store in the DB. This allows us to return underlying Categories and Workflows in a clean way.
Added a new status to workflows called "not_started", meaning we have not yet instantiated a processor or created a BPMN, they have no version yet and no stored data, just the possiblity of being started.
The Top Level Workflow or "Master" workflow is now a part of the sample data, and loaded at all times.
Removed the ability to "add a workflow to a study" and "remove a workflow from a study", a study contains all possible workflows by definition.
Example data no longer creates users or studies, it just creates the specs.
2020-03-30 12:00:16 +00:00
|
|
|
from crc import db, ma
from crc.api.common import ApiErrorSchema
from crc.models.file import FileModel, SimpleFileSchema, FileSchema
from crc.models.protocol_builder import ProtocolBuilderStatus, ProtocolBuilderStudy
from crc.models.workflow import WorkflowSpecCategoryModel, WorkflowState, WorkflowStatus, WorkflowSpecModel, \
    WorkflowModel


class StudyModel(db.Model):
    __tablename__ = 'study'
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String)
    last_updated = db.Column(db.DateTime(timezone=True), default=func.now())
    protocol_builder_status = db.Column(db.Enum(ProtocolBuilderStatus))
    primary_investigator_id = db.Column(db.String, nullable=True)
    sponsor = db.Column(db.String, nullable=True)
    hsr_number = db.Column(db.String, nullable=True)
    ind_number = db.Column(db.String, nullable=True)
    user_uid = db.Column(db.String, db.ForeignKey('user.uid'), nullable=False)
    investigator_uids = db.Column(db.ARRAY(db.String), nullable=True)
    requirements = db.Column(db.ARRAY(db.Integer), nullable=True)
    on_hold = db.Column(db.Boolean, default=False)
    enrollment_date = db.Column(db.DateTime(timezone=True), nullable=True)

    def update_from_protocol_builder(self, pbs: ProtocolBuilderStudy):
        self.hsr_number = pbs.HSRNUMBER
        self.title = pbs.TITLE
        self.user_uid = pbs.NETBADGEID
        self.last_updated = pbs.DATE_MODIFIED
        self.protocol_builder_status = ProtocolBuilderStatus.active
        if pbs.HSRNUMBER:
            self.protocol_builder_status = ProtocolBuilderStatus.open
        if self.on_hold:
            self.protocol_builder_status = ProtocolBuilderStatus.hold
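    # Rough illustration of the status resolution above, assuming a ProtocolBuilderStudy
    # record `pbs` (hypothetical values; only the attribute names used above are assumed):
    #
    #     study_model.update_from_protocol_builder(pbs)
    #     # -> protocol_builder_status is `open` when pbs.HSRNUMBER is set,
    #     #    `active` when it is empty, and `hold` whenever the study is on_hold.

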
class WorkflowMetadata(object):
    def __init__(self, id, name=None, display_name=None, description=None, spec_version=None,
                 category_id=None, category_display_name=None, state: WorkflowState = None,
                 status: WorkflowStatus = None, total_tasks=None, completed_tasks=None,
                 display_order=None):
        self.id = id
        self.name = name
        self.display_name = display_name
        self.description = description
        self.spec_version = spec_version
        self.category_id = category_id
        self.category_display_name = category_display_name
        self.state = state
        self.status = status
        self.total_tasks = total_tasks
        self.completed_tasks = completed_tasks
        self.display_order = display_order

    @classmethod
    def from_workflow(cls, workflow: WorkflowModel):
        instance = cls(
            id=workflow.id,
            name=workflow.workflow_spec.name,
            display_name=workflow.workflow_spec.display_name,
            description=workflow.workflow_spec.description,
            spec_version=workflow.spec_version(),
            category_id=workflow.workflow_spec.category_id,
            category_display_name=workflow.workflow_spec.category.display_name,
            state=WorkflowState.optional,
            status=workflow.status,
            total_tasks=workflow.total_tasks,
            completed_tasks=workflow.completed_tasks,
            display_order=workflow.workflow_spec.display_order
        )
        return instance


class WorkflowMetadataSchema(ma.Schema):
    state = EnumField(WorkflowState)
    status = EnumField(WorkflowStatus)

    class Meta:
        model = WorkflowMetadata
        additional = ["id", "name", "display_name", "description",
                      "total_tasks", "completed_tasks", "display_order",
                      "category_id", "category_display_name"]
        unknown = INCLUDE


class Category(object):
    def __init__(self, model: WorkflowSpecCategoryModel):
        self.id = model.id
        self.name = model.name
        self.display_name = model.display_name
        self.display_order = model.display_order


class CategorySchema(ma.Schema):
    workflows = fields.List(fields.Nested(WorkflowMetadataSchema), dump_only=True)

    class Meta:
        model = Category
        additional = ["id", "name", "display_name", "display_order"]
        unknown = INCLUDE


class Study(object):
    def __init__(self, title, last_updated, primary_investigator_id, user_uid,
                 id=None,
                 protocol_builder_status=None,
                 sponsor="", hsr_number="", ind_number="", categories=[],
                 files=[], approvals=[], enrollment_date=None, **argsv):
        self.id = id
        self.user_uid = user_uid
        self.title = title
        self.last_updated = last_updated
        self.protocol_builder_status = protocol_builder_status
        self.primary_investigator_id = primary_investigator_id
        self.sponsor = sponsor
        self.hsr_number = hsr_number
        self.ind_number = ind_number
        self.categories = categories
        self.approvals = approvals
        self.warnings = []
        self.files = files
        self.enrollment_date = enrollment_date

    @classmethod
    def from_model(cls, study_model: StudyModel):
        id = study_model.id  # Just read some value, in case the dict expired, otherwise dict may be empty.
        args = dict((k, v) for k, v in study_model.__dict__.items() if not k.startswith('_'))
        instance = cls(**args)
        return instance

    def update_model(self, study_model: StudyModel):
        for k, v in self.__dict__.items():
            if not k.startswith('_'):
                study_model.__dict__[k] = v

    def model_args(self):
        """Arguments that can be passed into the Study Model to update it."""
        self_dict = self.__dict__.copy()
        del self_dict["categories"]
        del self_dict["warnings"]
        return self_dict
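    # Sketch of the intended round trip between this object and its StudyModel row
    # (the session handling shown is an assumption about the calling code):
    #
    #     study = Study.from_model(study_model)    # DB row -> API-facing object
    #     study.title = "Renamed study"
    #     study.update_model(study_model)          # copy plain fields back onto the row
    #     db.session.add(study_model)
    #     db.session.commit()

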
class StudySchema(ma.Schema):
    id = fields.Integer(required=False, allow_none=True)
    categories = fields.List(fields.Nested(CategorySchema), dump_only=True)
    warnings = fields.List(fields.Nested(ApiErrorSchema), dump_only=True)
    protocol_builder_status = EnumField(ProtocolBuilderStatus)
    hsr_number = fields.String(allow_none=True)
    sponsor = fields.String(allow_none=True)
    ind_number = fields.String(allow_none=True)
    files = fields.List(fields.Nested(FileSchema), dump_only=True)
    approvals = fields.List(fields.Nested('ApprovalSchema'), dump_only=True)
    enrollment_date = fields.Date(allow_none=True)

    class Meta:
        model = Study
        additional = ["id", "title", "last_updated", "primary_investigator_id", "user_uid",
                      "sponsor", "ind_number", "approvals", "files", "enrollment_date"]
        unknown = INCLUDE

    @marshmallow.post_load
    def make_study(self, data, **kwargs):
        """Can load the basic study data for updates to the database, but categories are write only"""
        return Study(**data)
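    # Sketch of how this schema behaves (payload values are made up):
    #
    #     schema = StudySchema()
    #     study = schema.load({"title": "Trial A", "user_uid": "dhf8r", "last_updated": None,
    #                          "primary_investigator_id": "abc2d"})   # -> Study via make_study()
    #     data = schema.dump(study)   # dump_only fields (categories, warnings, files,
    #                                 # approvals) only appear in serialized output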