Initially we referred to this as compliance, but because “compliance” also has a meaning in a governance context, we’re using “adherence” to describe the extent to which a participant performs all the assessments they are asked to perform as part of a study.
In the v2 science APIs, Bridge has three APIs that are involved in schedule adherence:
The schedule itself, which is finite in length and described by a timeline;
The set of adherence records for the participant which describe what the participant has done in that timeline;
A set of event timestamps that tell us where the participant is in the timeline (i.e. what the participant should have done, and what they should currently be doing).
So far the Bridge server provides this information through separate APIs and does not keep track of the participant’s time. For the sake of efficiency, I think we want the Bridge server to combine this information and provide reports on the status of the account, which will be less resource intensive than serving out all the information to be processed by a client or worker process. The issues I see with this:
For the first time the server will need to have knowledge of the participant’s time and time zone;
The reports are (relatively) expensive to calculate and they are written seldom but read frequently, so they will need to be cached.
API (Event Day Adherence Report)
Given the designs I have seen so far, I created the “event day” report which would be available via a single API:
Method | Path | Description |
---|---|---|
GET | /v5/studies/{studyId}/participants/{userId\|self}/adherence/eventday | Retrieve the event day adherence report for a participant (or “self” for the caller’s own report). |
The JSON looks as follows:
[ { "startEventId":"study_burst:ClinicVisit:01", "eventTimestamp":"2021-10-27T19:00:00.000Z", "entries":{ "1":[ { "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS", "label": "Session #1", "symbol": "circle", "timeWindows":[ { "sessionInstanceGuid":"ePcCf6VmfIiVuU0ckdBeRw", "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv", "state":"not_yet_available", "type":"EventDayWindow" }, { "sessionInstanceGuid":"DB13D4mO72j6S-g7PIkI2Q", "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5", "state":"not_yet_available", "type":"EventDayWindow" } ], "type":"EventDay" } ], "2":[ { "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS", "label": "Session #1", "symbol": "circle", "timeWindows":[ { "sessionInstanceGuid":"wvEV4fJZQ0nfgY-TN2LekA", "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv", "state":"not_yet_available", "type":"EventDayWindow" }, { "sessionInstanceGuid":"IHDTSoj552vGDv1Qt7nXkg", "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5", "state":"not_yet_available", "type":"EventDayWindow" } ], "type":"EventDay" } ] }, "type":"EventDayAdherenceReport" }, { "startEventId":"study_burst:ClinicVisit:02", "eventTimestamp":"2021-11-16T19:00:00.000Z", "entries":{ "1":[ { "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS", "label": "Session #1", "symbol": "circle", "timeWindows":[ { "sessionInstanceGuid":"zk7X4dQCy7Nvnuo2PcnSCA", "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv", "state":"not_yet_available", "type":"EventDayWindow" }, { "sessionInstanceGuid":"rMRne-cbwIN5mkGZLymxzg", "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5", "state":"not_yet_available", "type":"EventDayWindow" } ], "type":"EventDay" } ], "2":[ { "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS", "label": "Session #1", "symbol": "circle", "timeWindows":[ { "sessionInstanceGuid":"QXM1cO6yb0gSPWzRwRD8eA", "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv", "state":"not_yet_available", "type":"EventDayWindow" }, { "sessionInstanceGuid":"hCXFevxbBnpaUYjH212dsQ", "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5", "state":"not_yet_available", "type":"EventDayWindow" } ], "type":"EventDay" } ] }, "type":"EventDayAdherenceReport" } ]
All sessions in the timeline are combined in this view so that there is one (and only one) entry per event, and under each event, a list by day of the activities the participant should perform as a result of that event. All potential events in the schedule are included in this report whether or not they exist for the user. The actual “days since each event” are then calculated to determine what the state of each of these assessments is:
State | Description | Adherence |
---|---|---|
not_yet_available | Participant should not have seen or started this assessment. | N/A |
unstarted | Participant should see the assessment (they are being asked to do it now), but they have not started it. | unknown |
started | Participant has started the assessment. | unknown |
completed | Participant has completed the assessment before it expired and was removed from the UI. | compliant |
abandoned | Participant started the assessment but it expired before they finished it. | noncompliant |
ignored | Participant did not start the assessment; it is now expired. | noncompliant |
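As a rough illustration (not the actual Bridge implementation), the “days since event” calculation and the adherence record timestamps might map to these states something like this; the startedOn/finishedOn fields and all other names are assumptions for the sake of the sketch:

```java
// Hypothetical sketch only: derive a window's state from the participant's current
// "days since event" and the started/finished timestamps (if any) recorded for that
// session instance. Field and method names are assumptions, not the Bridge model.
public class WindowStateCalculator {

    public enum SessionState {
        NOT_YET_AVAILABLE, UNSTARTED, STARTED, COMPLETED, ABANDONED, IGNORED
    }

    public static SessionState calculateState(int daysSinceEvent, int startDay, int endDay,
            Long startedOn, Long finishedOn) {
        if (daysSinceEvent < startDay) {
            return SessionState.NOT_YET_AVAILABLE; // participant should not see it yet
        }
        boolean expired = daysSinceEvent > endDay; // window is past its last scheduled day
        if (finishedOn != null) {
            return SessionState.COMPLETED;         // assuming it finished before expiration
        }
        if (startedOn != null) {
            return expired ? SessionState.ABANDONED : SessionState.STARTED;
        }
        return expired ? SessionState.IGNORED : SessionState.UNSTARTED;
    }
}
```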
I would propose that a participant’s noncompliance percentage is equal to noncompliant / (compliant + noncompliant + unknown). We can then set a threshold at which we would want to intervene (from 0%, where any failure gets reported, to something less stringent).
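A minimal sketch of that calculation, assuming we have already counted window states into the compliant/noncompliant/unknown buckets:

```java
// Hypothetical sketch of the proposed noncompliance calculation and threshold check.
public class AdherenceMath {

    // noncompliance % = noncompliant / (compliant + noncompliant + unknown)
    public static int noncompliancePercentage(int compliant, int noncompliant, int unknown) {
        int total = compliant + noncompliant + unknown;
        if (total == 0) {
            return 0; // nothing has been scheduled yet
        }
        return (int) Math.round(100.0 * noncompliant / total);
    }

    // thresholdPercentage = 0 means any noncompliant assessment triggers an intervention
    public static boolean shouldIntervene(int noncompliancePercentage, int thresholdPercentage) {
        return noncompliancePercentage > thresholdPercentage;
    }
}
```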
Adherence Event Day Report
Basic structure of an adherence record (per participant) is:
durationInDays (the highest endDay value calculated by the timeline, which can be lower than the schedule duration, but the schedule duration will cap any open-ended sequences);
Enum studyAdherence (compliant, not in compliance, some measure of this);
Map<Stream,List<SessionAdherence>> adherenceTimeline;
There’s nothing in this about event sequences that haven’t happened because they haven’t been triggered by an event. We might need to encode, in a schedule, what is desired in terms of event generation.
There’s nothing about individual sessions or assessments being optional. There’s no easy way to determine adherence for persistent time windows, so we’ll assume that performing such a window one time is compliant…maybe.
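A hedged sketch of this per-participant structure in Java (names and enum values are illustrative, not a committed schema; Stream and SessionAdherence are stubbed here and described in the next sections):

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the per-participant adherence report described above;
// names and enum values are illustrative, not a committed Bridge schema.
public class ParticipantAdherenceReport {

    public enum StudyAdherence {
        COMPLIANT, NONCOMPLIANT // or some finer-grained measure of adherence
    }

    // Placeholder types; fleshed out in the Stream and SessionAdherence sections below.
    public static class Stream {}
    public static class SessionAdherence {}

    // Highest endDay value calculated by the timeline (capped by the schedule duration).
    private int durationInDays;

    // Overall measure of the participant's adherence to the study protocol.
    private StudyAdherence studyAdherence;

    // One list of session adherence records per stream.
    private Map<Stream, List<SessionAdherence>> adherenceTimeline;
}
```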
Stream
The timeline describes parts of the schedule that can repeat when their triggering events can change over time. Study bursts are another example of repeating parts of the schedule. Each of these represents a separate stream.
label (the session name, probably)
startEventId
eventTimestamp
session window GUID (each time window generates one stream)
SessionAdherence (it might be possible to edit the state if a coordinator knows the state better). In the visual designs, all time windows are grouped under each session (in our server model, each window creates a separate stream of activities…but we would want a set of tests). So we might have a set of WindowAdherence records, as below, in one session adherence container (not sure there’s much in the container).
state
- not yet available NOT_YET_AVAILABLE
- available but unstarted UNSTARTED
- started STARTED
- successfully completed COMPLETED
- expired unfinished ABANDONED
- expired never started IGNORED/NONCOMPLIANT
instanceGuid
sessionGuid
startDay
endDay
startTime
expiration
SessionAdherence needs to boil down the completed state of the session based on the timestamps, I think. The user would be out of compliance if some of their states are expired. It should probably be possible to calculate these records and then store them so Bridge can deliver them. A summary of the state of the record should probably also be saved for efficient querying.
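Pulling this together, here is a rough sketch of what the stream, session, and window adherence records might look like, including the session-level roll-up; every name and type here is illustrative:

```java
import java.util.List;

// Hypothetical sketch of the stream/session/window adherence records described above.
// Names and types are illustrative only.
public class AdherenceStreamModel {

    public enum WindowState {
        NOT_YET_AVAILABLE, UNSTARTED, STARTED, COMPLETED, ABANDONED, IGNORED
    }

    // One stream per (event, time window) pairing; a study burst produces one stream per burst event.
    public static class Stream {
        String label;          // probably the session name
        String startEventId;
        String eventTimestamp; // timestamp of the triggering event for this participant
        String timeWindowGuid; // each time window generates one stream
    }

    // One record per scheduled window occurrence.
    public static class WindowAdherence {
        WindowState state;
        String instanceGuid;
        String sessionGuid;
        int startDay;
        int endDay;
        String startTime;  // local time of day, e.g. "08:00" (type is an assumption)
        String expiration; // e.g. an ISO 8601 duration like "PT12H" (type is an assumption)
    }

    // Container for the windows of one session; the session is out of compliance
    // if any of its windows expired without being completed.
    public static class SessionAdherence {
        List<WindowAdherence> windows;

        boolean isOutOfCompliance() {
            return windows.stream().anyMatch(w ->
                    w.state == WindowState.ABANDONED || w.state == WindowState.IGNORED);
        }
    }
}
```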
Caching individual participant adherence reports
There are options:
Cache them with Redis (probably my least-preferred option since domain logic will be based on these reports);
Write them to a participant report either when requested, or on a nightly basis, and expose only reports through the APIs (treating reports as a caching mechanism);
Write the report back to the database (this may actually be more performant than calculating it in memory, but that isn’t proven). If we want paginated views of individual performance status, then we’ll need to define a schema for the base information and store it back to a table.
The reports only need to change when the user submits an event or an adherence record (or when the timeline changes, which would invalidate quite a lot). Typically we delete the cached item on these writes and let the next read re-create the cached item.
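A minimal sketch of that write-invalidate, read-recreate pattern; the in-memory map here is just a stand-in for whatever store we pick (Redis, a report, or a SQL table), and all names are hypothetical:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of "delete the cached report on writes, rebuild it on the next read."
public class AdherenceReportCache {

    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> reportBuilder; // userId -> serialized report

    public AdherenceReportCache(Function<String, String> reportBuilder) {
        this.reportBuilder = reportBuilder;
    }

    // Reads recreate the report if it is missing (read-through).
    public String getReport(String userId) {
        return cache.computeIfAbsent(userId, reportBuilder);
    }

    // Called when the user submits an event or an adherence record.
    public void onParticipantWrite(String userId) {
        cache.remove(userId);
    }

    // Called when the timeline changes; this invalidates every participant's report.
    public void onTimelineChange() {
        cache.clear();
    }
}
```

Whichever storage option we pick, the invalidation triggers are the same: participant writes and timeline changes.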
Protocol adherence
There are definitely a lot of possible ways we could measure adherence. However, it can be measured as a proportion of compliant to noncompliant scheduled assessment states. We could then set a threshold for this (i.e. warn administrators if there are any noncompliant assessments, vs. setting a proportion threshold that would need to be reached before notifying administrators). This can be calculated as part of the event day adherence report.
Arm and study summaries
Not sure what the requirements are for these, but worker processes would periodically (or when prompted by a message) calculate and store these. The logical place would be study reports (but again a SQL table is an option).
Messaging APIs
One UI design I have seen shows a list of notifications in an administrative UI, indicating specific participants that are out-of-adherence with the study protocol. This hints at a very large amount of functionality.
We have had requirements to message both participants and study administrators over the years. For example, very early on we embedded a blog in an app for this purpose. Then we implemented support for remote notifications and topics (rarely used). We often message participants through local notifications based on scheduling information. Now we are talking about showing notifications to administrators about participants being out-of-adherence in an administrative UI.
I would like to add a simple but complete messaging system to Bridge which could be used to persist, deliver, and record the consumption of messages. Features would include:
API would be push-pull (write a message, request a list of active messages for a recipient; push-push is a lot more complicated);
recipients could include individuals or organizations (and other groups once we have them);
each recipient could mark the message as “read” to remove it from their own UI (separately from other recipients), or “resolved” to remove it from everyone’s UIs. Messages would not be deleted, so they remain auditable;
messages could be set to expire;
Messages could indicate if they can be read, resolved, or if they expire. For example, an adherence message might only allow you to mark it “resolved” since you don’t want multiple study coordinators all following up with a participant.
Messages could be assigned (change recipients), and indicate a status like “busy” along with the user currently working on the message. Now you have a very simple way to hand off work between team members.
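To make the proposal a bit more concrete, here is a rough sketch of what such a persisted message record might look like; every field name here is hypothetical, and this is not an existing Bridge API:

```java
import java.time.Instant;
import java.util.Set;

// Hypothetical sketch of a persisted message in the proposed messaging system.
public class StudyMessage {
    String messageId;
    String subject;
    String body;

    Set<String> recipients; // individual user IDs or organization IDs
    Set<String> readBy;     // recipients who marked it "read" (hidden from their UI only)
    String resolvedBy;      // whoever marked it "resolved" (hidden from everyone's UI)
    String assignedTo;      // current owner, for handing work off between team members
    String status;          // e.g. "busy" while someone is working on the message

    boolean allowRead;      // can individual recipients mark it read?
    boolean allowResolve;   // can it be resolved for everyone (e.g. adherence messages)?
    Instant expiresOn;      // optional expiration
    // Messages are never deleted, so the record remains auditable.
}
```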
Once pull messaging is working, we could support “push” messaging (where we send a message via email or SMS, or possibly through remote or local push notifications). This introduces a lot of additional complexity, however:
It would need to be integrated with Amazon’s messaging APIs, recording “sent” status. The advantage is that we’d have a record of such messaging, which is part of the study protocol and something we’ve built one-off solutions for in the past for scientists;
We’d want to implement messaging preferences, per user/organization and possibly by message type.