This is a very preliminary design note; I want to validate the direction of this approach before digging into details.
I am proposing that the assessment model in Bridge 2 have the following properties:
A hierarchical structure of typed nodes. The assessment model in Bridge 2 is a hierarchical tree structure where each node in the assessment is typed using a meta typing system (not the Java types of the server’s implementation, and not types as defined by Swagger, but a meta model of type information that can be associated with any node in an assessment structure). This would allow us to ship some common assessment node types like survey or string question, with associated metadata about how the node should behave, but it would also be extensible by client developers without server-side development work.
Example. If client developers wanted to define a new kind of question for a survey, they could do so without server-side development work. If this definition became generally useful, the type model could be copied as a default to all studies in the system.
Every node in an assessment can be referenced in new assessments. A given node can have many parents, and even nodes that are only defined within an assessment could be promoted to be “root” assessments in their own right, or referenced as part of another assessment. (This has consequences for how these assessments are represented, queried, and retrieved).
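The node-sharing property above can be sketched as a toy model. This is not the proposed implementation; the field names and the choice to represent children as guid references are assumptions for illustration only:

```python
# Hypothetical sketch of node sharing; names and structure are invented.
# Children are held as guid references into a shared pool of nodes, so one
# node can appear under many parents.
nodes = {
    "q-mood": {
        "guid": "q-mood",
        "root": False,
        "typeId": "stringQuestion",
        "children": [],
    },
    "survey-a": {
        "guid": "survey-a",
        "root": True,
        "typeId": "survey",
        "children": ["q-mood"],
    },
    "survey-b": {
        "guid": "survey-b",
        "root": True,
        "typeId": "survey",
        "children": ["q-mood"],  # the same question reused in a second assessment
    },
}

def parents_of(guid):
    """Return the guids of every node that references this node as a child."""
    return [n["guid"] for n in nodes.values() if guid in n["children"]]

def promote_to_root(guid):
    """'Promote' a shared node so it is listed as an assessment in its own right."""
    nodes[guid]["root"] = True
```

Under this representation, a node's parent set is derived by scanning for references, which is one concrete illustration of the consequences for querying and retrieval mentioned above.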
This approach has pros and cons.
Pros:
Client developers currently maintain a lot of information about the internal structure and behavior of assessments that could be partially moved to the server (when it is independent of any specific platform implementation);
This approach requires less server-side development work as our assessments evolve;
Structures that we have desired in Bridge, such as survey sections, or screens with multiple form inputs, are trivially added to the system with this implementation (the types need to be defined and authoring tools have to be augmented to construct these new assessment structures);
Any assessment can be reconstructed into a new assessment (compound assessments are the default in this design structure);
Cons:
These structures may raise some performance issues when we want to query them. However, apps could bundle all these structures when they are released to the App Store; we do not necessarily need to query the server for them repeatedly, since we want to move away from supporting in-flight changes to running studies.
Data boundaries and data validation become more difficult, though we are deprecating our current support for validating uploaded data.
Any meta model is more difficult to implement and understand. However, we have built similar models before, with schemas and survey constraints. (The model proposed here would replace survey constraints.)
Assessment Nodes (/v1/assessments/*)
Colloquially, we can refer to an assessment as any assessment node that has been marked as a “root” node (i.e., it is visible in the API as an assessment, and is a thing that would be directly scheduled for participants). All other nodes in an assessment can be referred to as assessment nodes, although the root is the same kind of record.
The data in an assessment node would be as follows:
Field | Data Type | Notes |
---|---|---|
studyId | String | Like all models, these are scoped and can vary between studies, although each study should be populated with a set of default assessment node types |
internalLabel | String | The label of this assessment when shown to study designers and implementers. |
internalDescription | String | The description of this assessment when shown to study designers and implementers. It might initially be copied from module information, but could then be changed. |
createdOn, modifiedOn | DateTime, DateTime | |
moduleId, moduleVersion | String, Integer | References to a shared module from which this assessment tree was copied into a study. (Some metadata about the assessment should probably be retrieved from this module.) |
deleted | boolean | Assessments can be logically deleted if they are not referenced in any other assessment |
guid | String | |
root | boolean | Should this assessment node appear in lists of assessments as presented to study designers? This cannot be inferred from a node having no parents. |
label | String | A descriptor of the assessment |
labelDetail | String | A longer description of the assessment |
prompt | String | |
promptDetail | String | Probably a “learn more” feature |
image | Image | The metadata to load an image via HTTP |
beforeRules, afterRules | AssessmentRule | Similar to rules currently defined in survey elements, rules for navigating an assessment tree can be defined on any node in the tree |
children | List<Assessment> | An ordered list of child assessment nodes |
copyrightNotice | String | |
version | Long | optimistic locking version |
typeId | String | The type of this assessment node (see below) |
typeMetadata | Map<String,Object> | Metadata that can be defined for this node based on its type (see below) |
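A concrete node under this model might look like the following. This is a hypothetical payload; every value is invented for illustration, and only a subset of the fields above is shown:

```python
# Hypothetical example of an assessment node; all values are invented.
string_question = {
    "studyId": "study-abc",
    "guid": "node-123",
    "root": False,  # a leaf question, not directly schedulable
    "internalLabel": "Mood question",
    "label": "How are you feeling today?",
    "typeId": "stringQuestion",
    # The keys permitted here would be constrained by the stringQuestion type.
    "typeMetadata": {"maxLength": 250, "pattern": "^[\\w ]+$"},
    "children": [],  # a survey node would list child nodes here
    "deleted": False,
    "version": 1,  # optimistic locking version
}
```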
Assessment Node Types (/v1/assessments/types/*)
An assessment node type defines a string identifier of the node’s type, and the metadata that can be collected for any node of that type.
Example. The string question node type has certain defined properties. It can only be included as a child in certain other node types (a survey or a form). It can collect minLength, maxLength, pattern, patternErrorMessage, and patternPlaceholder metadata that can be used to validate data in the UI. If you were to add this type to a node that was not in a survey or form, or included a key in the metadata map that was not listed in the metadata for this type, or failed to include a required key in the map, the server would throw a validation error.
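The validation described above could be sketched roughly as follows. The type record shape, the allowedParentTypes field, and the required flags are all assumptions made for illustration; the real rules would live in the assessment node type records:

```python
# Hypothetical type record for the string question; structure is invented.
STRING_QUESTION_TYPE = {
    "identifier": "stringQuestion",
    # Assumed representation of "can only be a child of a survey or form".
    "allowedParentTypes": {"survey", "form"},
    "definitions": [
        {"identifier": "minLength", "dataType": "Number", "required": False},
        # maxLength marked required purely for illustration.
        {"identifier": "maxLength", "dataType": "Number", "required": True},
        {"identifier": "pattern", "dataType": "String", "required": False},
        {"identifier": "patternErrorMessage", "dataType": "String", "required": False},
        {"identifier": "patternPlaceholder", "dataType": "String", "required": False},
    ],
}

def validate_metadata(node_type, parent_type_id, metadata):
    """Raise ValueError if the node's metadata violates its type definition."""
    errors = []
    if parent_type_id not in node_type["allowedParentTypes"]:
        errors.append(f"type {node_type['identifier']} not allowed under {parent_type_id}")
    allowed = {d["identifier"] for d in node_type["definitions"]}
    required = {d["identifier"] for d in node_type["definitions"] if d["required"]}
    for key in metadata:
        if key not in allowed:
            errors.append(f"unknown metadata key: {key}")
    for key in required - metadata.keys():
        errors.append(f"missing required metadata key: {key}")
    if errors:
        raise ValueError("; ".join(errors))
```

The three failure modes in the example (wrong parent type, unknown key, missing required key) each produce a validation error here.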
This type system does not support inheritance, and it does not support cascading from parent to child nodes (e.g. setting a type on the root does not make that type’s metadata fields available to sub-nodes). This makes the system simpler to implement and understand, but it is also a significant limitation if a new property ever needs to be made available across all types.
Field | Data Type | Notes |
---|---|---|
identifier | String | A primary key for this type within a study |
studyId | String | The study this type is defined in |
label | String | A label for the type |
definitions | List<AssessmentTypeMetadataEntryDefinition> | A list of definitions guiding validation of metadata for this node type (see below) |
Assessment Type Metadata Entry Definitions
To provide validation of assessment authoring, we can provide constraints for the data that is allowed in a metadata map in a given node, once it is set to a given type:
Field | Data Type | Notes |
---|---|---|
label | String | Human readable explanation of the field’s value |
identifier | String | The key value to use when storing this metadata in the metadata map |
dataType | Enum (String, Number, Boolean) | The type of the value that can be entered under this key |
required | boolean | Are assessment authors required to supply this value if the node is typed with the given type? |
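Building on the definitions table, the dataType check could look like this. The mapping from the enum values to concrete value types is an assumption for illustration:

```python
# Hypothetical mapping from the dataType enum to Python value types.
DATA_TYPES = {
    "String": str,
    "Number": (int, float),
    "Boolean": bool,
}

def check_value(definition, value):
    """Return True if the value matches the definition's declared dataType."""
    # bool is a subclass of int in Python, so reject it for Number explicitly.
    if definition["dataType"] == "Number" and isinstance(value, bool):
        return False
    return isinstance(value, DATA_TYPES[definition["dataType"]])

# Invented definition record, matching the table above.
max_length = {"identifier": "maxLength", "dataType": "Number", "required": True}
check_value(max_length, 250)    # matches: Number accepts an int
check_value(max_length, "250")  # fails: a string is not a Number
```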
Note: the options you can select from when defining a multiple-choice form field are technically metadata about that field, and they are complicated structures (an array of objects). They would need to be represented here.
A study design UI would need to allow for authoring and managing assessment types as well as assessments themselves (perhaps both can also be imported from the shared module library, though I would expect to clone all the default assessment types into each new study when it is created).
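Cloning the defaults at study creation could be as simple as copying each default type record under the new studyId. This sketch assumes types are keyed by (studyId, identifier); the default records shown are invented:

```python
import copy

# Hypothetical default types shipped with the system; fields are illustrative.
DEFAULT_TYPES = [
    {"studyId": "defaults", "identifier": "survey",
     "label": "Survey", "definitions": []},
    {"studyId": "defaults", "identifier": "stringQuestion",
     "label": "String question", "definitions": []},
]

def clone_default_types(new_study_id):
    """Copy every default assessment node type into a newly created study."""
    cloned = []
    for t in DEFAULT_TYPES:
        c = copy.deepcopy(t)  # deep copy so later edits don't touch the defaults
        c["studyId"] = new_study_id
        cloned.append(c)
    return cloned
```

Because each study gets its own copies, a study can then edit or extend its types without affecting any other study, which is the scoping behavior described in the assessment node table.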
Data boundaries and export
I would propose that all the data collected by an assessment (as a reminder, the root assessment node of an assessment definition) should be exported as a single dataset.
While the assessment definition drives the client presentation of the question and contains information about constraining input to ensure its validity, there would not be any server validation of the data uploaded by the client. [That’s already how it works though, now that we’ve gotten rid of schemas.]