This is a very preliminary design note; I want to validate the direction of this approach before digging into the details.
I am proposing that the assessment model in Bridge 2 have the following properties:
A hierarchical structure of typed nodes. The assessment model in Bridge 2 is a tree in which every node is an Assessment or a subtype of Assessment. Some common subtypes would be our existing server objects like Survey or StringQuestion (the latter collapsing SurveyElement and the constraints for a string question). Some of these subtypes may simply be markers, while others may contain additional fields (all question types would include additional fields, replacing the constraints in our current implementation model).
Limited ability to extend the node type system. Authors can assign an assessment a “type” field and attach a map of metadata to the instance, so that they can take an existing node type and give it additional information (see the sketch following these properties). Bridge does not consider this a new type, but a UI or authoring tool may use this information to treat it as one, with the understanding that successful experiments in defining new node types will be incorporated into later versions of the Bridge domain model.
Every assessment node can be treated as a first-order assessment. By simply marking an assessment node as a “root,” it will show up in our assessment APIs as an assessment and can be made available for scheduling.
Every child node in an assessment tree can be included in other assessments. A given node can have many parents, allowing for the composition of new assessments.
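To make the extension and composition properties concrete, here is a minimal sketch in Java. The Assessment stand-in class, its field names, and the “StringQuestion”/“uiHint” values are assumptions for illustration only; the proposed node fields are detailed in the table later in this note.

```java
import java.util.List;
import java.util.Map;

// Minimal stand-in for the proposed node model; only a few of the fields
// from the table later in this note are shown, and the names are illustrative.
class Assessment {
    String label;
    String type;                  // author-assigned "type"
    Map<String, Object> metadata; // free-form, stored but not validated by Bridge
    boolean root;                 // surfaces the node in the assessments API
    List<Assessment> children;    // ordered child nodes
}

public class CompositionSketch {
    public static void main(String[] args) {
        // A reusable child node: an existing node type given an author-defined
        // "type" and extra metadata that Bridge stores but does not interpret.
        Assessment nickname = new Assessment();
        nickname.label = "What should we call you?";
        nickname.type = "StringQuestion";
        nickname.metadata = Map.of("uiHint", "textfield");

        // Two root assessments include the same child node: a node can have
        // many parents, composing new assessments from existing pieces.
        Assessment onboarding = new Assessment();
        onboarding.root = true;
        onboarding.children = List.of(nickname);

        Assessment weeklyCheckIn = new Assessment();
        weeklyCheckIn.root = true;
        weeklyCheckIn.children = List.of(nickname); // same node, second parent
    }
}
```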
This model provides three important benefits over the existing model: 1) it unifies all assessments under a common type model, with future extensions available to all forms of assessments; 2) existing nodes can be dynamically composed into new assessments; and 3) it allows for the nesting of assessment elements (survey groups, forms with multiple controls on one screen, etc.).
The type system is not fully dynamic. I toyed with this. For example, an assessment of a given type could include a JSON schema describing the additional information that could be defined for that type, or we could develop our own typing system, as we did with upload schemas. There are a few problems with this:
it would encourage unplanned extension of the type system to include UI and implementation-specific details that are not supportable by the Bridge server and not used across client platforms;
it would be difficult to use in environments where client libraries are generated from our API definitions (the generated libraries, like the type system, would be completely generic, giving authors little guidance about what actual app libraries can be expected to support);
errors produced by schema validation are notoriously opaque and difficult for API consumers to understand and fix, particularly when the authors of the app submitting the data are not the authors of the assessment type system.
Assessment Nodes (/v1/assessments/*)
All assessments are nodes, but only assessments that are marked as “roots” will appear in the assessments API and can be scheduled via other APIs. When an assessment node is a root, it can simply be referred to as an assessment where context warrants.
The data in an assessment node would be as follows:
Field | Data type | Notes |
---|---|---|
studyId | String | Like all our models, assessment nodes are scoped to a study and can vary between studies, although each study should be populated with a set of default assessment node types |
internalLabel | String | The label of this assessment when shown to study designers and implementers. |
internalDescription | String | The description of this assessment when shown to study designers and implementers. It might initially be copied from module information, but could then be changed. |
createdOn, modifiedOn | DateTime, DateTime | |
moduleId, moduleVersion | String, Integer | References to a shared module from which this assessment tree was copied into a study. (Some metadata about the assessment should probably be retrieved from this module.) |
deleted | boolean | Assessments can be logically deleted if they are not referenced in any other assessment |
guid | String | |
root | boolean | Should this assessment node appear in lists of assessments as presented to study designers? (This isn’t determinable from having no parents.) |
label | String | A descriptor of the assessment |
labelDetail | String | A longer description of the assessment |
prompt | String | |
promptDetail | String | Probably a “learn more” feature |
image | Image | The metadata to load an image via HTTP |
beforeRules, afterRules | AssessmentRule | Similar to rules currently defined in survey elements, rules for navigating an assessment tree can be defined on any node in the tree |
children | List<Assessment> | An ordered list of child assessment nodes |
copyrightNotice | String | |
version | Long | Optimistic locking version |
metadata | Map<String,Object> | Metadata that can be defined for this node by assessment authors (has no defined meaning for Bridge and is not validated) |
Here is a very partial class hierarchy based on Bridge’s current domain support (note that Assessment is not an abstract class and can be the root node without further sub-typing):
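The following Java sketch stands in for that hierarchy. Only Assessment, Survey, and StringQuestion come from this note; the specific question fields (minLength, maxLength, pattern) are assumptions for illustration, not a definitive list.

```java
import java.util.List;
import java.util.Map;

// Assessment is concrete: a plain Assessment can be a root node (or any other
// node) without further sub-typing. Only a few fields from the table above
// are repeated here.
public class Assessment {
    String guid;
    String label;
    boolean root;
    Map<String, Object> metadata;
    List<Assessment> children;
    // beforeRules, afterRules, image, copyrightNotice, etc. omitted
}

// A marker subtype: no additional fields, but it tells clients to render the
// node and its children as a survey.
class Survey extends Assessment {
}

// Question subtypes carry their own fields in place of the constraints in the
// current implementation model; the fields shown here are illustrative.
class StringQuestion extends Assessment {
    Integer minLength;
    Integer maxLength;
    String pattern;
}
```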
Data boundaries and export
I would propose that all the data collected by an assessment (as a reminder, the root assessment node of an assessment definition) should be exported as a single dataset.
While the assessment definition drives the client presentation of questions and contains the information needed to constrain input and ensure its validity, there would not be any server-side validation of the data uploaded by the client. [That’s already how it works though, now that we’ve gotten rid of schemas.]
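Purely as an illustration of that boundary, here is one hypothetical shape a single exported dataset could take, with answers keyed by the guid of the node that collected them. Nothing about this structure or these names is defined by this proposal.

```java
import java.util.Map;

public class ExportSketch {
    public static void main(String[] args) {
        // Hypothetical: one dataset per root assessment, holding everything its
        // tree collected. The server stores and exports this as-is; it does not
        // validate the values against the assessment definition.
        Map<String, Object> onboardingDataset = Map.of(
            "node-guid-1", "Alex",                 // a StringQuestion answer
            "node-guid-2", 42,                     // a numeric answer
            "node-guid-3", Map.of("skipped", true) // or any structure the client sends
        );
    }
}
```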