
Initially we referred to this as “compliance,” but because that term also has a meaning in a governance context, we use “adherence” to describe the extent to which a participant performs all of the assessments they are asked to perform as part of their participation in a study.

In the v2 science APIs, Bridge has three APIs that are involved in schedule adherence:

  1. The schedule itself, which is finite in length and described by a timeline;

  2. The set of adherence records for the participant which describe what the participant has done in that timeline;

  3. A set of event timestamps that tell us where the participant is in the timeline (i.e., what the user should have done and what they should currently be doing).

So far the Bridge server provides this information through separate APIs and does not keep track of the participant’s local time. For the sake of efficiency, I think we want the Bridge server to combine this information and provide reports on the status of the account, which will be less resource intensive than serving out all the information to be processed by a client or worker process. The issues I see with this:

  1. For the first time the server will need to have knowledge of the participant’s time and time zone;

  2. Because this information depends on the time of the request, it is not very cacheable;

  3. Nevertheless, the reports probably update infrequently (the exact frequency depends on many factors), while they may be read frequently in a couple of different formats.

Event Day Adherence Report API

Given the designs I have seen so far, I would suggest an “event by day” report which would be available via a single API:

Method: GET
Path: /v5/studies/{studyId}/participants/{userId}/adherence/eventday?activeOnly=true|false
Description: activeOnly = show only currently available days. This can be expanded to show days before or after, etc., as useful to clients.

The JSON looks as follows (note that we can add whatever further information we need based on the UI, though ultimately it may make better sense to look up session and assessment metadata in the timeline). The existing designs are currently minimal, with all windows simply represented by a symbol colored to show the window’s current adherence state:

[
  {
    "startEventId":"study_burst:ClinicVisit:01",
    "eventTimestamp":"2021-10-27T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"ePcCf6VmfIiVuU0ckdBeRw",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"DB13D4mO72j6S-g7PIkI2Q",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"wvEV4fJZQ0nfgY-TN2LekA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"IHDTSoj552vGDv1Qt7nXkg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ]
    },
    "type":"EventDayAdherenceReport"
  },
  {
    "startEventId":"study_burst:ClinicVisit:02",
    "eventTimestamp":"2021-11-16T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"zk7X4dQCy7Nvnuo2PcnSCA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"rMRne-cbwIN5mkGZLymxzg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",          
          "timeWindows":[
            {
              "sessionInstanceGuid":"QXM1cO6yb0gSPWzRwRD8eA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"hCXFevxbBnpaUYjH212dsQ",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ]
    },
    "type":"EventDayAdherenceReport"
  }
]

All sessions in the timeline are grouped in this view by the event that triggers them, and then by the number of days since that event. All potential events in the schedule are included in this report whether or not they exist for the user (we don’t currently have a way to say “count this part of the schedule if the event exists for the user, but don’t count it if the event doesn’t exist for the user”). Then the actual “days since each event” are calculated to determine the state of each time window (the window state is a session-level measure, derived from the adherence records of the assessments in that session).

Basically we figure out what they should be doing, and then look at their adherence records to figure out if they’re doing it.

The states are:

State | Description | Adherence
not_yet_available | Participant should not have seen or started this session. It’s in the future. | N/A
unstarted | Participant should see the session (they are being asked to do it now), but they have not started it. | unknown
started | Participant has started the session. | unknown
completed | Participant completed the session before it expired. | compliant
abandoned | Participant started or finished at least one assessment in the session, but there was more work to do and it expired before they finished it. | noncompliant
expired | Participant did not start the session and it is now expired. | noncompliant
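
As a rough sketch of how these states might be derived (the type and method names here are hypothetical, not the actual Bridge classes), the server would compute “days since event” in the participant’s time zone and compare it to each window’s availability range and its adherence record:

import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.temporal.ChronoUnit;

// Hypothetical sketch only; not the Bridge server's actual domain model.
public class EventDayStateSketch {

    enum WindowState { NOT_YET_AVAILABLE, UNSTARTED, STARTED, COMPLETED, ABANDONED, EXPIRED }

    /** Days since an event timestamp, calculated in the participant's time zone. */
    static int daysSinceEvent(Instant eventTimestamp, Instant now, ZoneId participantZone) {
        LocalDate eventDate = eventTimestamp.atZone(participantZone).toLocalDate();
        LocalDate today = now.atZone(participantZone).toLocalDate();
        return (int) ChronoUnit.DAYS.between(eventDate, today);
    }

    /**
     * Derive the state of one time window instance from where the participant is in the
     * event stream (daysSinceEvent), the window's availability range (startDay..endDay),
     * and the window's adherence record (startedOn/finishedOn, whether any assessment in
     * the session was touched). Assumes finishedOn was recorded before expiration.
     */
    static WindowState stateFor(int daysSinceEvent, int startDay, int endDay,
            Instant startedOn, Instant finishedOn, boolean anyAssessmentTouched) {
        if (daysSinceEvent < startDay) {
            return WindowState.NOT_YET_AVAILABLE;   // still in the future
        }
        if (finishedOn != null) {
            return WindowState.COMPLETED;
        }
        if (daysSinceEvent > endDay) {
            // The window expired without being finished.
            return (startedOn != null || anyAssessmentTouched)
                    ? WindowState.ABANDONED : WindowState.EXPIRED;
        }
        return (startedOn != null) ? WindowState.STARTED : WindowState.UNSTARTED;
    }
}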

I would propose that a participant’s noncompliance percentage is equal to noncompliant / (compliant + noncompliant + unknown). We can then set a threshold at which we would want to intervene (from 0% — any failure gets reported — to something less stringent).
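
For example, a minimal sketch of that calculation (the class and method names are made up):

// Sketch of the proposed noncompliance percentage, tallied from the window states in
// the event-day report. Windows that are not_yet_available are excluded entirely.
public class NoncomplianceSketch {

    static double noncompliancePercentage(int compliant, int noncompliant, int unknown) {
        int denominator = compliant + noncompliant + unknown;
        return denominator == 0 ? 0.0 : (100.0 * noncompliant) / denominator;
    }

    public static void main(String[] args) {
        // e.g. 2 noncompliant windows out of 10 compliant + 2 noncompliant + 4 unknown = 12.5%
        System.out.println(noncompliancePercentage(10, 2, 4));
    }
}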

TODO: How do we represent persistent time windows? I think I have said elsewhere that performing it one time = compliant, but that might not be correct.

TODO: There’s nothing here about whether or not specific events should be generated for the user…we’re assuming that the user must do everything in the schedule.

TODO: This view operates on the most recent timestamp for each event. Study bursts generate multiple events and will work with this, but do we have cases where a part of the timeline is supposed to repeat due to an event being updated? For example, if the event is “visited the clinic” and the study expects participants to visit four times over the length of the study, we cannot capture this as an expectation. They’d have to create four separate events for this; they couldn’t just update “clinic_visited” four times.

Study-level paged APIs

These reports are not easily cacheable because the states depend on the current time of the request. A single report takes about 1/3 of a second to generate, so paginated lists of users would be prohibitively expensive to calculate on demand. Let’s assume we have a worker process that creates some form of update (probably of just the active days of the report) and caches it for the list view API. This means the list view will be behind the current state by a day or so. Coordinators could go in and look at individual accounts for more up-to-date information.

In past work I have also created “sweepers” for things like sporting event games that just endlessly loop and refresh caches. Given half a second for each record, for example, all of mPower could be updated every 16 minutes.
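
A minimal sketch of such a sweeper, assuming hypothetical report and cache interfaces (this is not existing Bridge code):

import java.util.List;

// Stand-ins for the real report generator and cache.
interface AdherenceReportService {
    String generateEventDayReport(String studyId, String userId); // serialized report JSON
}
interface ReportCache {
    void put(String studyId, String userId, String reportJson);
}

// Endlessly loops over a study's participants, refreshing each cached report. At roughly
// half a second per record, about 1,900 accounts can be refreshed every 16 minutes.
public class AdherenceReportSweeper implements Runnable {
    private final AdherenceReportService service;
    private final ReportCache cache;
    private final String studyId;
    private final List<String> userIds;

    public AdherenceReportSweeper(AdherenceReportService service, ReportCache cache,
            String studyId, List<String> userIds) {
        this.service = service;
        this.cache = cache;
        this.studyId = studyId;
        this.userIds = userIds;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            for (String userId : userIds) {
                cache.put(studyId, userId, service.generateEventDayReport(studyId, userId));
            }
        }
    }
}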

Method: GET
Path: /v5/studies/{studyId}/participants/adherence/eventday?offsetBy=&pageSize=

Here is a visual design for this feature; this is all I have on requirements at this point:

Each row represents one participant, presumably seven days around their current day in the study. Some questions to answer about this:

  • Is this the image of the most recent study burst for each user? If so, day 1 is the day 0 of that burst, not just “three days ago.” And what if the burst is longer than 7 days, or the schedule does not use study bursts?

  • If the user is in a “fallow” period of the schedule (like a period of time between study bursts), what do we return?

I would be inclined to simply show X days before and after “now” for the user, across all event streams. That means there could be more than one row of symbols per participant, I think (or maybe the different sessions are just shown side-by-side).

Presumably we will only persist the records we need for this view, and then we can return all records in the paged API. I think one record in this API must be equal to one participant, and it will contain nested information to draw one or more of these rows.

DateAdherenceReport

Here’s one possible report structure for the above (all the sessions and session windows are collapsed into one day of information, and all the event-relative information is dropped because that dashboard is about the present; it is date-based). The dashboard above shows seven of these reports for each user, one per day. That is too much, so we need another level of summarization where we create a week-long report for each user.

I do not yet understand how we show multiple sessions in the above report…side-by-side icons of different shapes? One row per kind of session? That’ll change this report.

This report can be calculated from the EventDay report.

{
    "studyId": "foo",
    "userId": "BnfcoJLwm95XXkd_Dxjo_w",
    "date": "2021-10-27",
    "sessions": [
        {
            "sessionGuid": "sePT2TAcQ-1ZBY_xeuoA7w0r",
            "label": "Persistent session",
            "symbol": "square",
            "sessionInstanceGuid": "BnfcoJLwm95XXkd_Dxjo_w",
            "state": "completed"
        },
        {
            "sessionGuid": "gDnRq7C1LMeTdT1aCDC4vTOo",
            "label": "Also repeats on enrollment at different cadence",
            "symbol": "triangle",
            "sessionInstanceGuid": "mwHBh8lxaW7zYMMOAhhQKw",
            "state": "started"
        }
    ]
}
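
As a sketch of that calculation (the types below mirror the JSON shapes above but are otherwise hypothetical, and the assumption that entry key “1” means one day after the event is mine), the idea is to resolve each event-day entry to a calendar date in the participant’s time zone and keep only the windows that fall on the requested date:

import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of collapsing event-day reports into a date-based report.
public class DateAdherenceSketch {

    record EventDayWindow(String sessionInstanceGuid, String timeWindowGuid, String state) {}
    record EventDay(String sessionGuid, String label, String symbol, List<EventDayWindow> timeWindows) {}
    record EventDayAdherenceReport(String startEventId, Instant eventTimestamp,
            Map<Integer, List<EventDay>> entries) {}
    record SessionOnDate(String sessionGuid, String label, String symbol,
            String sessionInstanceGuid, String state) {}

    /** Collect the session windows that fall on the given calendar date for this participant. */
    static List<SessionOnDate> sessionsOn(LocalDate date, ZoneId participantZone,
            List<EventDayAdherenceReport> eventDayReports) {
        List<SessionOnDate> sessions = new ArrayList<>();
        for (EventDayAdherenceReport report : eventDayReports) {
            LocalDate eventDate = report.eventTimestamp().atZone(participantZone).toLocalDate();
            for (Map.Entry<Integer, List<EventDay>> entry : report.entries().entrySet()) {
                if (!eventDate.plusDays(entry.getKey()).equals(date)) {
                    continue; // this event day is not the requested date
                }
                for (EventDay day : entry.getValue()) {
                    for (EventDayWindow window : day.timeWindows()) {
                        sessions.add(new SessionOnDate(day.sessionGuid(), day.label(),
                                day.symbol(), window.sessionInstanceGuid(), window.state()));
                    }
                }
            }
        }
        return sessions;
    }
}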

Protocol adherence and notifications

There are definitely a lot of possible ways we could measure adherence, and I have asked some questions about this above. We can probably be pretty flexible on this since, at some point, we have to load the whole state of the user’s participation to generate this report. We could then set a threshold for this (i.e., warn administrators if there are any non-compliant assessments, vs. setting a proportion threshold that would need to be hit before notifying administrators). This warning apparently applies to specific participants, so it would be generated as part of the worker process described above.
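
For instance, the worker could apply a study-configurable threshold as it refreshes each participant’s cached report. This is a sketch only; the threshold value and the message service are assumptions, not existing Bridge features:

// Sketch: warn study coordinators when a participant crosses the study's adherence threshold.
public class AdherenceThresholdCheck {

    interface MessageService {
        void notifyStudyCoordinators(String studyId, String message);
    }

    static void checkAdherence(double noncompliancePercentage, double studyThreshold,
            String studyId, String userId, MessageService messages) {
        // A threshold of 0.0 means any noncompliant window triggers a warning.
        if (noncompliancePercentage > studyThreshold) {
            messages.notifyStudyCoordinators(studyId, "Participant " + userId
                    + " is out of adherence (" + noncompliancePercentage + "%)");
        }
    }
}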

Messaging APIs

One UI design I have seen shows a list of notifications in an administrative UI, indicating specific participants that are out-of-adherence with the study protocol. This hints at a very large amount of functionality.

We have had requirements to message both participants and study administrators over the years. For example, very early on we embedded a blog in an app for this purpose. Then we implemented support for remote notifications and topics (rarely used). We often message participants through local notifications based on scheduling information. Now we are talking about showing notifications to administrators about participants being out-of-adherence in an administrative UI.

I would like to add a simple but complete messaging system to Bridge which could be used to persist, deliver, and record the consumption of messages (a sketch of a possible message record follows this list). Features would include:

  • API would be push-pull (write a message, request a list of active messages for a recipient; push-push is a lot more complicated);

  • recipients could include individuals or organizations (and other groups once we have them);

  • each recipient could mark the message as “read” to remove it from their own UI (separately from other recipients), or “resolved” to remove it from everyone’s UIs. Messages would not be deleted, so they remain auditable;

  • messages could be set to expire;

  • Messages could indicate if they can be read, resolved, or if they expire. For example, an adherence message might only allow you to mark it “resolved” since you don’t want multiple study coordinators all following up with a participant.

  • Messages could be assigned (change recipients), and indicate a status like “busy” along with the user currently working on the message. Now you have a very simple way to hand off work between team members.
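
As a sketch only (the field names are made up and not an existing Bridge model), a persisted message record might look something like this:

{
  "guid": "example-message-guid",
  "subject": "Participant out of adherence",
  "body": "Participant example-user-id has noncompliant sessions in study foo.",
  "recipients": [
    {"type": "Organization", "id": "example-org-id"},
    {"type": "Individual", "id": "example-coordinator-id"}
  ],
  "allowRead": false,
  "allowResolve": true,
  "expiresOn": "2021-11-30T00:00:00.000Z",
  "readBy": [],
  "resolvedBy": null,
  "assignedTo": null,
  "status": "open",
  "type": "Message"
}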

Once pull messaging is working, we could support “push” messaging (where we send a message via email or SMS, or possibly through remote or local push notifications). This introduces a lot of additional complexity, however:

  • It needs to be integrated with Amazon’s messaging APIs, where we record “sent” status. The advantage is that we’d have a record of such messaging, which is part of the study protocol and something we’ve built one-off solutions for in the past for scientists;

  • We’d want to implement messaging preferences, per user/organization and possibly by message type.
