54 Commits

Author SHA1 Message Date
Dan
0a906a4b3c Cleaning up print statements (they were making it hard to see what was happening).
The Jinja2 service was treating all of the task data as a possible template; modified it to only include the referenced templates.
(This turned out not to be the problem, but it seems like a good idea to keep it in place.)
There is a terrible bug with the wordwrap filter that will die without any details if you pass it a value of None.  We now capture the terrible error and replace it with a sensible one.
Removed an unused Jinja_extensions file.
2022-03-08 15:46:36 -05:00
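The wordwrap failure described above can be guarded against generically. A minimal sketch, assuming Jinja2 3.x and not reflecting the project's actual fix, replaces the built-in filter with a wrapper that raises a clear error when handed None:

    from jinja2 import Environment, pass_environment
    from jinja2.filters import do_wordwrap

    @pass_environment
    def safe_wordwrap(environment, value, *args, **kwargs):
        # Replace the cryptic failure on None with a message that names the problem.
        if value is None:
            raise ValueError("The wordwrap filter received None; check the template variable.")
        return do_wordwrap(environment, value, *args, **kwargs)

    env = Environment()
    env.filters["wordwrap"] = safe_wordwrap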
Dan
635a112796 still trying to fix a rogue test. 2022-02-18 10:41:24 -05:00
Dan
6adf1107fe Trying to figure out why these files are not the same on testing. 2022-02-18 10:12:36 -05:00
Dan
b72ecb8375 Another re-work to fix 619 - and to ensure that we aren't rebuilding the lookup tables too frequently. 2022-02-17 11:59:48 -05:00
Dan
f2b6008e5f Fixes 619 - lookup models were being built incorrectly, and repeatedly, and sometimes bombed out altogether.
Bonus: respond with a valid error message when an invalid task_id is requested (don't just 500).
2022-02-17 11:04:50 -05:00
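The "don't just 500" behaviour above could be sketched in Flask terms roughly as follows; the helper names are hypothetical, not the project's API:

    from flask import jsonify

    def get_task(workflow, task_id):
        task = workflow.get_task(task_id)   # hypothetical lookup; returns None if missing
        if task is None:
            return jsonify(code="invalid_task_id",
                           message=f"No task with id '{task_id}' exists in this workflow."), 404
        return jsonify(task.serialize())    # hypothetical serializer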
Dan
df3f67601c Performance improvements, and removed the last remnants of load_example_data(). 2022-02-09 23:29:39 -05:00
Dan
e9fd76ed99 Lookup service tests passing, but I need to revisit this. 2022-02-09 12:11:51 -05:00
alicia pritchett
38d64b1ffd Fixes a workflow-model-related test.
Yes, I call a service from a service, whoops.
2022-02-09 11:07:09 -05:00
mike cullerton
2e4bab9d04 Add TODO 2022-02-08 15:04:47 -05:00
mike cullerton
3f856355e2 Merge branch 'git-integration-596' into feature/spec_files_wthout_db
# Conflicts:
#	config/testing.py
#	crc/services/lookup_service.py
2022-02-08 10:38:03 -05:00
Dan
4ec6e403f5 1. Created a UserFileService, so it's clear which file service we use when; UserFiles, SpecFiles, and ReferenceFiles each function differently.
2. Reference Files and Spec Files are written to disk; they do not exist in the database at all.
2022-02-02 12:59:56 -05:00
mike cullerton
c07c429ae1 Fix for the new version of Pandas; it checks data types now. 2022-02-01 10:42:18 -05:00
Dan
4c00a5762f Partial commit - new spec_file_service, and new spec_file_api endpoints that use spec and file name, not file id.
Removed workflow_sync.
Cleaned up file and workflow models.
Most of the tests are broken.
2022-01-28 06:42:37 -05:00
Dan
8529465322 Removed the method get_spec_data_files completely - using get_spec_files and get_spec_data to get this information instead.
Only load the spec data files if you are creating a new workflow; otherwise just deserialize the JSON.
Removed the stuff about calculating the version of the spec, as we don't use it.
2022-01-25 16:10:54 -05:00
Dan
f815add699 1. Add a default directory for the location of SYNC files.
2. Added a last_updated column to the lookup table
3. The Lookup service now uses the above and compares it to the actual file date; we can then rebuild the lookup if needed.
4. That 755 migration loads up the models, so when you change the models the migration starts to fail.  Not really sure what to do here, but we will modify the migration while we are in process.
2022-01-20 13:05:58 -05:00
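Point 3 above amounts to a staleness check. A small sketch, with assumed attribute names, of comparing the lookup table's last_updated column against the source file's modification time:

    import os
    from datetime import datetime, timezone

    def lookup_is_stale(lookup_model, file_path):
        # Rebuild if we have never built it, or if the file changed after the last build.
        file_date = datetime.fromtimestamp(os.path.getmtime(file_path), tz=timezone.utc)
        return lookup_model.last_updated is None or lookup_model.last_updated < file_date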
mike cullerton
68820c67cb Removed (almost) all references to WorkflowSpecDependencyFile
(There is still a call in the lookup service, but we need to decide how to fix that)
2022-01-19 16:12:54 -05:00
mike cullerton
091d71eb0f Cleaned up code around differences between file info and file data
Cleaned up some api code around differences between file, spec_file, and reference_file
Cleaned up some api code around differences between file info and file data
Fixed some tests for file api
2022-01-19 13:47:14 -05:00
mike cullerton
b99ed73951 Remove unused imports 2022-01-12 15:00:26 -05:00
mike cullerton
cfa9f00bf3 *** WIP ***
Moved reference files to their own service
2022-01-12 14:37:33 -05:00
mike cullerton
9cc91f92c3 *** WIP ***
cleanup - removing commented code
2022-01-11 15:55:08 -05:00
mike cullerton
4df2ed6ce4 *** WIP ***
Failing tests, and missing functionality.
Committing to get stuff on GitHub.
2022-01-11 15:30:22 -05:00
mike cullerton
dc27f795c8 *** WIP ***
Committing because it is Friday afternoon, and my computer is acting flaky
2022-01-07 15:34:51 -05:00
mike cullerton
86a6039dc8 *** WIP ***
**Many** tests are failing!

Committing so I can merge dev into this branch
2022-01-06 11:46:54 -05:00
mike cullerton
cb77db26a3 Minor edit, for clarity 2021-11-16 12:05:20 -05:00
mike cullerton
9f18484ebb Catch the exception when reading an older xls spreadsheet into pandas
Renamed `xls` variable to `xlsx`, so it makes more sense
Added a hint to error_service for validation
2021-11-16 11:54:31 -05:00
Dan
d1eae3c15a Validation was failing for enum_label() expressions when called within a sub-process. Possible (but unlikely) that this would occur outside validation. 2021-11-09 12:55:06 -05:00
Dan
84ce24243f add an enum_label script that will return the label given a value selection. 2021-10-21 13:57:49 -04:00
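Conceptually, enum_label is a reverse lookup from value to label. A rough sketch, with an illustrative option format:

    def enum_label(options, value):
        # options: list of {"value": ..., "label": ...} dicts backing an enum field
        for option in options:
            if option["value"] == value:
                return option["label"]
        raise KeyError(f"No enum option has the value {value!r}")

    # enum_label([{"value": "ds", "label": "Data Security Plan"}], "ds") -> "Data Security Plan"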
Dan
5429e7da7d All enumerated lists used in web forms should contain a single value, not a dictionary of value/labels.
Removing the spreadsheet.value.column and data.value.column so we just have value.column for both.
Improving the __str__ function in the ApiError class, to make debugging a little easier.
Adding a "validate_all" flask command, to help us track down any issues with current workflows in production (use this in concert with sync_with_testing)
Fixed lots of tests.
Removed fact_runner.py, a very early and crufty bit of code.
2021-10-19 10:13:43 -04:00
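A "validate_all" command can be wired up as a standard Flask CLI command. A minimal sketch, with hypothetical loader and validator helpers:

    import click
    from flask.cli import with_appcontext

    @click.command("validate_all")
    @with_appcontext
    def validate_all():
        # Walk every workflow spec and report which ones fail validation.
        for spec in get_all_workflow_specs():      # hypothetical loader
            try:
                validate_spec(spec)                # hypothetical validator
            except Exception as exc:
                click.echo(f"{spec.id}: FAILED - {exc}")
            else:
                click.echo(f"{spec.id}: ok")

    # Registered with: app.cli.add_command(validate_all)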
Dan
84680ea846 Fixing multiple issues that came out of Study Info, as we debugged issue #474 related to navigating back to a previous task.
There was also a problem with the Python script engine: it wasn't handling deserialization properly and didn't correctly pick the script engine back up, and the renaming of methods in PythonScriptEngine created some conflicts with the way we override functions.
We were not handling LDAP lookups efficiently, and this was also breaking in Study Info.

Finally, we had a bug in SpiffWorkflow that did not allow us to reset back to the previous task in some cases where nested call activities happen far later in the process and are currently active when the reset is created.
2021-10-06 12:17:57 -04:00
Dan
fb54edac1c Adding additional details to error messages, and cleaning up the cruft around these messages to keep them clear and succinct.
Most notable is the addition of the line on which the error occurs for script tasks.  It will report the line number and pass back the content of
the line that failed.
The validator only returns the first error it encounters, as it's clear that all we ever get right now is two of the same error.
Did a lot of work between this and SpiffWorkflow to remove all the places where we obfuscate or drop details as we converted between workflowExceptions and APIExceptions.
Dropped the Python Levenshtein dependency, in favor of just rolling a simple one ourselves in SpiffWorkflow.
2021-07-07 00:53:49 -04:00
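Reporting the failing line of a script task can be done with the standard library alone. A sketch of the general technique (not the project's exact implementation), which works for runtime errors raised inside the executed script:

    import sys
    import traceback

    def run_script(script_text):
        try:
            exec(compile(script_text, "<script_task>", "exec"), {})
        except Exception as exc:
            # The last traceback frame points at the line inside the script that failed.
            line_no = traceback.extract_tb(sys.exc_info()[2])[-1].lineno
            bad_line = script_text.splitlines()[line_no - 1].strip()
            raise RuntimeError(f"Error on line {line_no}: '{bad_line}' ({exc})") from exc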
Dan
1b1a994360 Refactoring Reference files to use the lookup table, rather than parsing the results directly out of the spreadsheet, or attempting to cache them.
Adding a DocumentService to clean up the FileService and get Documents well separated, as it seems likely to be pulled out or separated in the future; there is now a Documents api file as well, for the same reason.
Some other minor changes are just fixing white space to ensure our code lints correctly.
I removed _create_study_workflow_approvals from the base test, as we don't use approvals like this anymore.
2021-07-06 13:10:20 -04:00
Dan
07eb3f9ca8 Moving metrics into SpiffWorkflow so we can run the performance metrics deeply across both systems simultaneously.
Upgrading libraries.
Fixing a deprecation issue with Pandas and NumPy.
We can only process xlsx files now; plain old-school xls is fully removed.
2021-06-18 16:41:55 -04:00
Dan
c8f5a44050 adding a warning in the logger so we can see when expensive calls are made to rebuild searches for enumerations. But all looks good. 2021-03-02 12:21:51 -05:00
Dan
aac3d5c16e Bug #255: this requires the front end to pass in the name of the task when doing a lookup. This will prevent a bug where we have multiple user tasks with enum fields that set the same variable, but use different lookup tables to populate the dropdown or search feature. 2021-03-01 14:54:04 -05:00
Dan Funk
b544334f45 1. Updating Personnel BPMN diagram to debug some issues.
2. Disabling the token timeout for now, to see if this corrects the issues Alex is having with lost work.
3. Raising more thoughtful error messages for unknown lookup options.
4. Providing better validation of default values and injecting the correct value for defaults related to enum lists of all types.
5. Bumping Spiffworkflow library which contains some better error messages and checks.
2020-09-01 15:58:50 -04:00
Dan Funk
53d09303d8 Validating that field properties are valid - they must exist as constants on the Task model.
Making all the lookup field names consistent.
Fixing the lookup service which was failing at times trying to find the correct field to use for building the lookup table.
Updating validation to check for additional fields and properties.
When connexion-level errors occur, wrapping them in an API Error to be consistent.
2020-08-27 14:00:14 -04:00
Dan Funk
9a5c1d7cfb I may have finally wrapped my head around full-text search in Python. Now properly using an index based on the 'simple' rather than the 'english' dictionary, which has far fewer stop words and stemming processes and plays much better with the type-ahead search we are trying to provide.
Stop words are no longer excluded, so "other" is a valid search and gets a result.
2020-08-13 18:13:41 -04:00
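In PostgreSQL terms, the change is to build the tsvector with the 'simple' configuration, which does no stemming and keeps stop words. A sketch in SQLAlchemy, with an assumed model and label column:

    from sqlalchemy import func

    def typeahead_search(session, model, query):
        # Prefix-match each word so partial typing still returns results.
        ts_query = " & ".join(f"{term}:*" for term in query.split())
        return (session.query(model)
                .filter(func.to_tsvector("simple", model.label)
                        .op("@@")(func.to_tsquery("simple", ts_query)))
                .all())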
Dan Funk
9077ff3ebf It is not possible to use task_data for an auto-complete field. It's too expensive an operation to provide that feature on the backend, and the data already fully resides on the front end anyway. Task-data can be used to populate enum fields if needed, so it can populate dropdowns, radios and checkboxes, just not auto-complete. 2020-07-14 11:38:48 -04:00
Aaron Louie
463660f185 Merge branch 'dev' into feature/dynamic_enum_list 2020-07-13 17:47:56 -04:00
Aaron Louie
07066b8a16 Looks up enum options from task data 2020-07-13 17:46:28 -04:00
Dan Funk
9e29a43785 Correct for a race condition where multiple lookup tables are built for the same field and workflow specification, causing it to appear that the models are not updating correctly. 2020-07-13 12:45:51 -04:00
Aaron Louie
b7920989ed WIP: Adds Camunda property for retrieving enum field options from task data. 2020-07-10 14:48:38 -04:00
Dan Funk
84973d2351 resolving comments from pull request. 2020-06-30 12:24:48 -04:00
Dan Funk
93bf46354b A last minute change to make the API a little clearer and cleaner broke some tests. 2020-06-30 11:12:28 -04:00
Dan Funk
f183e12fe5 Provides some basic tools for getting additional information about a lookup field.
Adds an optional 'value' parameter to the lookup endpoint so you can find a specific entry in the lookup table.
Makes sure the data attribute returned on a lookup model is a dictionary, and not a string.
Fixes a previous bug that would crop up if double spaces were used when performing a search.
2020-06-30 10:34:16 -04:00
Dan Funk
d3ce1af1ce Provides some basic tools for getting additional information about a lookup field.
Adds an optional 'id' parameter to the lookup endpoint so you can find a specific entry in the lookup table.
Makes sure the data attribute returned on a lookup model is a dictionary, and not a string.
Fixes a previous bug that would crop up if double spaces were used when performing a search.
2020-06-30 10:00:22 -04:00
Dan Funk
fed6e86f92 Trying to fix LDAP issues on production. Changing LDAP to static-only methods, caching the connection, and calling bind before all connection requests.
Also ensuring we don't load the documents.xls file over and over again.
2020-06-04 14:59:36 -04:00
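The pattern described above - static-only methods, one cached connection, bind before each use - might be sketched with ldap3 roughly like this (hostname and base DN are placeholders):

    from ldap3 import Connection, Server

    class LdapService:
        _connection = None

        @staticmethod
        def connection():
            if LdapService._connection is None:
                LdapService._connection = Connection(Server("ldap.example.org"))  # placeholder host
            conn = LdapService._connection
            if not conn.bound:
                conn.bind()   # re-bind before use rather than rebuilding the connection
            return conn

        @staticmethod
        def user_info(uid):
            conn = LdapService.connection()
            conn.search("ou=people,dc=example,dc=org",   # placeholder base DN
                        f"(uid={uid})", attributes=["displayName", "mail"])
            return conn.entries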
Dan Funk
e102214809 minor cleanup of error codes. 2020-06-03 15:03:22 -04:00
Dan Funk
c7484267e1 For the main approval endpoints - we now group the approvals by study. So you get one record back for each study, but it may have other approvals along with it as "related_approvals".
We now cache the LDAP records - so we look in our own database for the record before calling out to LDAP for the details when given a straight-up computing id like dhf8r.

Added "date_approved" to the approval model.

And moved the approver and primary investigator into real associated models to make it easier to dump.

Fixed a problem with the validation that was causing it to throw incorrect errors on valid workflows. Getting it to behave a little more like the front end behaves, and respecting the read-only fields.  But it was mainly to do with always returning all the data with each form submission.
2020-06-02 18:17:00 -04:00
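Grouping the flat approval list by study, with the extras attached as related_approvals, is a straightforward fold. A sketch with hypothetical attribute names:

    from collections import defaultdict

    def group_by_study(approvals):
        by_study = defaultdict(list)
        for approval in approvals:
            by_study[approval.study_id].append(approval)
        grouped = []
        for group in by_study.values():
            primary, *rest = sorted(group, key=lambda a: a.id)
            primary.related_approvals = rest   # one record per study, extras hang off it
            grouped.append(primary)
        return grouped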
Dan Funk
11413838a7 Faster lookup fields. We were parsing the spec each time to get details about how to search. We're just grabbing the workflow id and task id now and building that straight into the full text search index for faster lookups. Should be peppy.
Another speed improvement - data in the FileDataModel is deferred, and not queried until it is specifically used, as the new data structures need to use this model frequently.
2020-05-29 01:39:39 -04:00
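Deferring the FileDataModel data column is standard SQLAlchemy. A minimal sketch (column names are illustrative) of a column that is only fetched when first accessed:

    from sqlalchemy import Column, Integer, LargeBinary
    from sqlalchemy.orm import declarative_base, deferred

    Base = declarative_base()

    class FileDataModel(Base):
        __tablename__ = "file_data"
        id = Column(Integer, primary_key=True)
        data = deferred(Column(LargeBinary))   # not loaded until the attribute is touched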