
Initially we referred to this as compliance, but because “compliance” also has a meaning in a governance context, we’re using “adherence” to describe the measurement of study member participation in the study.


In the v2 science APIs, Bridge has three APIs that are involved in schedule adherence:

  1. The schedule itself, which is finite in length and described by a timeline;

  2. The set of adherence records for the participant which describe what the participant has done in that timeline;

  3. A set of event timestamps that tell us where the participant is in the timeline (i.e. what the participant should have done and what they should currently be doing).

So far the Bridge server provides this information through separate APIs and does not keep track of the participant’s time. For the sake of efficiency, I think we want the Bridge server to combine this information and provide reports on the status of the account, which will be less resource intensive than serving out all the information to be processed by a client or worker process. The issues I see with this:

  1. For the first time the server will need to have knowledge of the participant’s time and time zone;

  2. Because this information depends on the time of the request, it is not very cacheable;

  3. Nevertheless, the reports probably update infrequently (the exact frequency depends on many factors), while they may be read frequently in a couple of different formats.

APIs

Method | Path (under /v5/studies/{studyId}) | Description
GET | /participants/{userId}/adherence/eventstream | List of SessionStream reports for one user. This is a detailed view of the whole schedule for one user, and the only view that shows scheduling for events the user does not have.
GET | /participants/{userId}/adherence/daterange | Paginated list of SessionsOnDate objects. Date range-based, with local dates spanning 14 days or fewer. Does not show sessions that are not applicable to the user.
GET | /participants/adherence/daterange | A paged list where each record is a set of 1-7 SessionsOnDate reports for a specific user. Date range-based, with local dates spanning 7 days or fewer. Does not show sessions that are not applicable to the user.

The last report contains a very large amount of information (basically seven pages of information per page). Because of this, I think we’ll need to write the SessionsOnDate reports to a store like Dynamo or MySQL and regenerate them with a worker process. We may also need to cache the resulting JSON on the web side.

SessionStream Report

Given the designs I have seen so far, I would suggest a set of records that show the sessions to be performed N days after each event, available as a single report that’ll be similar in size to the Timeline.

All sessions in the timeline are grouped in this view by the event that triggers them, and then by the number of days since that event. All potential events in the schedule are included in this report whether or not they exist for the user. Then the actual days since each event are calculated to determine the state of each time window (states are described below the JSON sample):

Code Block
languagejson
[
  {
    "startEventId":"study_burst:ClinicVisit:01",
    "eventTimestamp":"2021-10-27T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"ePcCf6VmfIiVuU0ckdBeRw",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            },
            {
              "sessionInstanceGuid":"DB13D4mO72j6S-g7PIkI2Q",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            }
          ],
          "type":"SessionStream"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"wvEV4fJZQ0nfgY-TN2LekA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            },
            {
              "sessionInstanceGuid":"IHDTSoj552vGDv1Qt7nXkg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            }
          ],
          "type":"SessionStream"
        }
      ]
    },
    "type":"SessionStreamReport"
  },
  {
    "startEventId":"study_burst:ClinicVisit:02",
    "eventTimestamp":"2021-11-16T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"zk7X4dQCy7Nvnuo2PcnSCA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            },
            {
              "sessionInstanceGuid":"rMRne-cbwIN5mkGZLymxzg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            }
          ],
          "type":"SessionStream"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",          
          "timeWindows":[
            {
              "sessionInstanceGuid":"QXM1cO6yb0gSPWzRwRD8eA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            },
            {
              "sessionInstanceGuid":"hCXFevxbBnpaUYjH212dsQ",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            }
          ],
          "type":"SessionStream"
        }
      ]
    },
    "type":"SessionStreamReport"
  }
]

The states are:

State | Description | Adherence
not_applicable | Participant does not have this event in their events, so these sessions will not currently ever be shown to the participant. | N/A
not_yet_available | Participant should not have seen or started this session; it is in the future. | N/A
unstarted | Participant should see the session (they are being asked to do it now), but they have not started it. | unknown
started | Participant has started the session. | unknown
completed | Participant completed the session before it expired. | compliant
abandoned | Participant started or finished at least one assessment in the session, but there was more work to do and the session expired before they finished it. | noncompliant
expired | Participant did not start the session and it has now expired. | noncompliant
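The states above could be derived for a single time window from the event date, the current date, and the window’s day range. Here is a minimal sketch; the class, method, and parameter names are illustrative assumptions, not Bridge’s actual implementation:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hypothetical derivation of a time window's state (names are assumptions).
public class WindowStateCalc {
    public static String stateFor(LocalDate eventDate, LocalDate today,
            int startDay, int endDay, boolean started, boolean finished) {
        long daysSince = ChronoUnit.DAYS.between(eventDate, today);
        if (daysSince < startDay) {
            return "not_yet_available"; // window opens in the future
        }
        if (finished) {
            return "completed"; // all assessments done
        }
        if (daysSince > endDay) {
            // expired: untouched, or abandoned if some work was started
            return started ? "abandoned" : "expired";
        }
        return started ? "started" : "unstarted";
    }
}
```

For example, a window running from day 1 to day 3 after an event is `not_yet_available` on the event day itself, `unstarted` on day 2 if untouched, and `abandoned` after day 3 if it was started but never finished.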

I would propose that a participant’s noncompliance percentage is equal to noncompliant / (compliant + noncompliant + unknown). We can then set a threshold at which we would want to intervene (from 0%, where any failure gets reported, to something less stringent).
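The proposed calculation can be sketched as follows (class and method names are illustrative, and the threshold is whatever the study configures):

```java
// Sketch of the proposed noncompliance percentage and threshold check.
public class AdherenceCalc {
    public static int noncompliancePercent(int compliant, int noncompliant, int unknown) {
        int total = compliant + noncompliant + unknown;
        if (total == 0) {
            return 0; // nothing actionable yet
        }
        return Math.round(100f * noncompliant / total);
    }

    // true when the participant's noncompliance reaches the study's threshold
    public static boolean overThreshold(int compliant, int noncompliant,
            int unknown, int thresholdPercent) {
        return noncompliancePercent(compliant, noncompliant, unknown) >= thresholdPercent;
    }
}
```

With a threshold of 0%, any noncompliant session triggers intervention; a higher threshold only flags participants once a meaningful fraction of their sessions have lapsed.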

Persistent time windows will be excluded from all adherence reports. Completing assessments that are part of a persistent window do not count for or against adherence.

All of these views operate on the most recent timestamps for all events. Building schedules that rely on a mutable event changing, and triggering a new timeline of sessions to perform, will not work with these adherence APIs (that would be events like “session type X has been completed”). On the other hand, these views will show adherence to the most recent time stream, and that might be all that matters; past time streams are no longer actionable.

SessionsOnDate Report

This is a date-range list API that returns a set of SessionsOnDate objects.

Code Block
public class SessionsOnDate {
  String userId;
  LocalDate date;
  List<SessionOnDate> sessions;
}

This is a flat report that lists all sessions with a startDate/endDate that overlaps the date of the record (remember that a session instance really exists for each time window of each session in a schedule). In the database, this is a set of records that will be grouped. For this reason, the date range is low: 1-7 days only. The database SessionOnDate table would look something like this:

Code Block
CREATE TABLE `SessionOnDate` (
  `appId` varchar(255) NOT NULL,
  `studyId` varchar(255) NOT NULL,
  `userId` varchar(255) NOT NULL,
  `sessionInstanceGuid` varchar(60) NOT NULL,
  `timeWindowGuid` varchar(60) NOT NULL,
  `startDate` date NOT NULL,
  `endDate` date NOT NULL,
  `state` varchar(60) NOT NULL,
  `label` varchar(255) DEFAULT NULL,
  `symbol` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`studyId`, `userId`, `sessionInstanceGuid`),
  CONSTRAINT `SessionsOnDate-Study-Constraint` FOREIGN KEY (`appId`, `studyId`) REFERENCES `Substudies` (`id`, `studyId`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
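For illustration, the overlap test that decides whether one of these rows appears on a given local date could look like this (the class and method names are hypothetical):

```java
import java.time.LocalDate;

// Hypothetical predicate: a SessionOnDate row appears on a given local date
// when that date falls inside the window's startDate/endDate range, inclusive.
public class DateOverlap {
    public static boolean overlaps(LocalDate startDate, LocalDate endDate, LocalDate day) {
        return !day.isBefore(startDate) && !day.isAfter(endDate);
    }
}
```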

Here is a visual design for this feature…this is all I have on requirements at this point:

Each row represents one participant, presumably seven days around their current day in the study. The information on the left-hand side about week and study burst cannot hold for all the sessions in the row. I am convinced the only way this view can be assembled is by aligning each user according to date: there is no query to produce this that doesn’t relate it to calendar time, nor is it useful if it’s not related to calendar time, since “this week” is implicitly calendrical.

Presumably we will only persist the records we need for this view, and then we can return all records in the paged API. I think one record in this API must be equal to one participant, and it will contain nested information to draw one or more of these rows.
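That grouping step, collapsing the flat persisted rows into one paged record per participant, could be sketched as follows; the Row type is a hypothetical stand-in for the persisted SessionOnDate row:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical grouping of flat SessionOnDate rows into one record per
// participant, holding the nested rows needed to draw that user's week.
public class ParticipantGrouper {
    public static class Row {
        final String userId;
        final String sessionInstanceGuid;
        public Row(String userId, String sessionInstanceGuid) {
            this.userId = userId;
            this.sessionInstanceGuid = sessionInstanceGuid;
        }
    }

    public static Map<String, List<Row>> groupByParticipant(List<Row> rows) {
        Map<String, List<Row>> byUser = new LinkedHashMap<>();
        for (Row row : rows) {
            byUser.computeIfAbsent(row.userId, k -> new ArrayList<>()).add(row);
        }
        return byUser; // one entry per participant = one record in the paged API
    }
}
```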

DateAdherenceReport

Here’s one possible report structure for the above (all the sessions and session windows are squished into one day of information, and all the relative information is dropped, because that dashboard is about “now” and is date-based). The dashboard above shows seven of these reports for each user, one per day. That is too much, so we need another level of summarization where we create a weeklong report for each user.

I do not yet understand how we show multiple sessions in the above report…side-by-side icons of different shapes? One row per kind of session? That’ll change this report.

This report can be calculated from the SessionStream (event stream) report.

Code Block
languagejson
{
    "studyId": "foo",
    "userId": "BnfcoJLwm95XXkd_Dxjo_w",
    "date": "2021-10-27",
    "sessions": [
        {
            "sessionGuid": "sePT2TAcQ-1ZBY_xeuoA7w0r",
            "label": "Persistent session",
            "symbol": "square",
            "sessionInstanceGuid": "BnfcoJLwm95XXkd_Dxjo_w",
            "state": "completed"
        },
        {
            "sessionGuid": "gDnRq7C1LMeTdT1aCDC4vTOo",
            "label": "Also repeats on enrollment at different cadence",
            "symbol": "triangle",
            "sessionInstanceGuid": "mwHBh8lxaW7zYMMOAhhQKw",
            "state": "started"
        }
    ]
}

Protocol adherence and notifications

There are definitely many possible ways we could measure adherence, and I have asked some questions about this above. We can probably be pretty flexible here since, at some point, we have to load the whole state of the user’s participation to generate this report. We could then set a threshold for this (i.e. warn administrators if there are any non-compliant assessments, vs. setting a proportion threshold that would need to be hit before notifying administrators). This warning apparently applies to specific participants, so it would be generated as part of the worker process described above.

Messaging APIs

One UI design I have seen shows a list of notifications in an administrative UI, indicating specific participants that are out-of-adherence with the study protocol. This hints at a very large amount of functionality.

We have had requirements to message both participants and study administrators over the years. For example, very early on we embedded a blog in an app for this purpose. Then we implemented support for remote notifications and topics (rarely used). We often message participants through local notifications based on scheduling information. Now we are talking about showing notifications to administrators about participants being out-of-adherence in an administrative UI.

I would like to add a simple but complete messaging system to Bridge which could be used to persist, deliver, and record the consumption of messages. Features would include:

  • API would be push-pull (write a message, request a list of active messages for a recipient; push-push is a lot more complicated);

  • recipients could include individuals or organizations (and other groups once we have them);

  • each recipient could mark the message as “read” to remove it from their own UI (separately from other recipients), or “resolved” to remove it from everyone’s UIs. Messages would not be deleted, so they remain auditable;

  • messages could be set to expire;

  • messages could indicate whether they can be read or resolved, and whether they expire. For example, an adherence message might only allow you to mark it “resolved,” since you don’t want multiple study coordinators all following up with the same participant;

  • Messages could be assigned (change recipients), and indicate a status like “busy” along with the user currently working on the message. Now you have a very simple way to hand off work between team members.
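The read/resolved semantics above could be sketched as follows (the class and method names are assumptions, not an existing Bridge API): “read” hides a message for one recipient only, “resolved” hides it for everyone, and nothing is ever deleted.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical message with per-recipient "read" and global "resolved" state.
public class StudyMessage {
    private final Set<String> readBy = new HashSet<>();
    private boolean resolved = false;

    public void markRead(String userId) { readBy.add(userId); }
    public void markResolved() { resolved = true; }

    // visible in a recipient's UI unless they read it or anyone resolved it;
    // the underlying record is kept either way, so it remains auditable
    public boolean visibleTo(String userId) {
        return !resolved && !readBy.contains(userId);
    }
}
```

For an adherence message, one coordinator resolving it would remove it from every coordinator’s UI at once, which is the point of the “resolved” state.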

Once pull messaging is working, we could support “push” messaging (where we send a message via email or SMS, or possibly through remote or local push notifications). This introduces a lot of additional complexity, however:

  • It would need to be integrated with Amazon’s messaging APIs, with “sent” status recorded. The advantage is that we’d have a record of such messaging, which is part of the study protocol and something we’ve built one-off solutions for in the past for scientists;

  • We’d want to implement messaging preferences, per user/organization and possibly by message type.