
Initially we referred to this as “compliance,” but because “compliance” also has meaning in a governance context, we’re using “adherence” to describe the measurement of study member participation in the study.

In the v2 science APIs, Bridge has three APIs that are involved in schedule adherence:

  1. The schedule itself, which is finite in length and described by a timeline;

  2. The set of adherence records for the participant, which describe what the participant has done in that timeline;

  3. A set of event timestamps that tell us where the participant is in the timeline (i.e. what the user should have done, and what they should currently be doing in the timeline).

So far the Bridge server provides this information through separate APIs and does not keep track of participants’ local time. For the sake of efficiency, I think we want the Bridge server to combine this information and provide reports on the status of the account, which will be less resource intensive than serving out all the information to be processed by a client or worker process. The issues I see with this:

  1. For the first time the server will need to have knowledge of the participant’s time and time zone (see the sketch after this list);

  2. Because this information depends on the time of the request, it is not very cacheable;

  3. Nevertheless, the reports probably update infrequently (how often depends on many factors), while they may be read frequently in a couple of different formats.
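Of these, the first issue is the most novel, so here is a minimal sketch of the “days since event” calculation it implies (TypeScript, with assumed names; this is illustrative, not Bridge’s actual implementation). The participant’s IANA time zone is the piece of information the server has not needed before:

// Hypothetical sketch: compute whole local days between an event timestamp
// and "now" in the participant's time zone. All names are assumptions.
function daysSinceEvent(eventTimestamp: string, zoneId: string, now: Date = new Date()): number {
  // The en-CA locale formats dates as YYYY-MM-DD, giving local calendar dates.
  const fmt = new Intl.DateTimeFormat("en-CA", { timeZone: zoneId });
  const eventDate = fmt.format(new Date(eventTimestamp));
  const today = fmt.format(now);
  // Date-only ISO strings parse as UTC midnight, so the difference is exact.
  return Math.round((Date.parse(today) - Date.parse(eventDate)) / 86_400_000);
}

// e.g. daysSinceEvent("2021-10-27T19:00:00.000Z", "America/Los_Angeles")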

APIs

| Method | Path (under /v5/studies/{studyId}) | Description |
| --- | --- | --- |
| GET | /participants/{userId}/adherence/eventstream | List of SessionStream reports for one user. This is a detailed view of the whole schedule for one user, and the only view that shows scheduling for events the user does not have. |
| GET | /participants/{userId}/adherence/daterange | Paginated list of SessionsOnDate objects. Date range-based, with local dates spanning 14 days or fewer. Does not show sessions that are not applicable to the user. |
| GET | /participants/adherence/daterange | A paged list where each record is a set of 1–7 SessionsOnDate reports for a specific user. Date range-based, with local dates spanning 7 days or fewer. Does not show sessions that are not applicable to the user. |

The last report returns a very large amount of information (basically seven pages of information per page). I think we’ll need to write the SessionsOnDate reports to a store like Dynamo or MySQL and regenerate them with a worker process, since it’s such a large amount of information. We may also need to cache the resulting JSON on the web side.

SessionStream Report

Given the designs I have seen so far, I would suggest a set of records showing the sessions to be performed N days after each event, available as a single report that will be similar in size to the Timeline. In the array, each record shows one event time stream, with the sessions that should be performed appearing under the “days since event N” key of the entries map. Right now, each day holds an array in which each session carries its own set of windows, since the windows are graphed together in the designs I have seen (and the completion state is recorded for each window, which for practical purposes is the session).

[
  {
    "startEventId":"study_burst:ClinicVisit:01",
    "eventTimestamp":"2021-10-27T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"ePcCf6VmfIiVuU0ckdBeRw",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            },
            {
              "sessionInstanceGuid":"DB13D4mO72j6S-g7PIkI2Q",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            }
          ],
          "type":"SessionStream"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"wvEV4fJZQ0nfgY-TN2LekA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            },
            {
              "sessionInstanceGuid":"IHDTSoj552vGDv1Qt7nXkg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            }
          ],
          "type":"SessionStream"
        }
      ]
    },
    "type":"SessionStreamReport"
  },
  {
    "startEventId":"study_burst:ClinicVisit:02",
    "eventTimestamp":"2021-11-16T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"zk7X4dQCy7Nvnuo2PcnSCA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            },
            {
              "sessionInstanceGuid":"rMRne-cbwIN5mkGZLymxzg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            }
          ],
          "type":"SessionStream"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",          
          "timeWindows":[
            {
              "sessionInstanceGuid":"QXM1cO6yb0gSPWzRwRD8eA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            },
            {
              "sessionInstanceGuid":"hCXFevxbBnpaUYjH212dsQ",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"SessionStreamWindow"
            }
          ],
          "type":"SessionStream"
        }
      ]
    },
    "type":"SessionStreamReport"
  }
]

All sessions in the timeline are grouped in this view by the event that triggers them, and then by the number of days since that event. All potential events in the schedule are included in this report whether they exist for the user or not (we don’t currently have a way to say “count this part of the schedule if the event exists for the user, but don’t count it if it doesn’t”). Then the actual days since each event are calculated to determine the state of each time window (the window state is a session-level measure of state, derived from the adherence records of the assessments in that session).

Basically we figure out what they should be doing, and then look at their adherence records to figure out if they’re doing it.

The states are:

| State | Description | Adherence |
| --- | --- | --- |
| not_yet_available | Participant should not have seen or started this session; it’s in the future. | N/A |
| unstarted | Participant should see the session (they are being asked to do it now), but they have not started it. | unknown |
| started | Participant has started the session. | unknown |
| completed | Participant completed the session before it expired. | compliant |
| abandoned | Participant started or finished at least one assessment in the session, but there was more work to do and it expired before they finished it. | noncompliant |
| expired | Participant did not start the session and it is now expired. | noncompliant |
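To make the mapping concrete, here is a minimal sketch (TypeScript, with an assumed adherence record shape; not Bridge’s implementation) of deriving a window’s state from “days since event” and the participant’s adherence record:

// Hypothetical sketch of window-state derivation. Field names are assumptions.
type WindowState =
  | "not_yet_available" | "unstarted" | "started"
  | "completed" | "abandoned" | "expired";

interface AdherenceRecord {
  startedOn?: string;   // set once any assessment in the session is started
  finishedOn?: string;  // set once all assessments in the session are finished
}

function deriveState(
  daysSinceEvent: number,
  windowStartDay: number,  // day the window opens, relative to the event
  windowEndDay: number,    // last day the window is available
  record?: AdherenceRecord
): WindowState {
  if (daysSinceEvent < windowStartDay) return "not_yet_available";
  const expired = daysSinceEvent > windowEndDay;
  if (record?.finishedOn) return "completed";                      // finished before it expired
  if (record?.startedOn) return expired ? "abandoned" : "started"; // work begun
  return expired ? "expired" : "unstarted";                        // never begun
}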

I would propose that a participant’s noncompliance percentage is equal to noncompliant / (compliant + noncompliant + unknown). We can then set a threshold at which we would want to intervene, from 0% (any failure gets reported) to something less stringent.
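As a sketch, the proposed calculation (with assumed field names) looks like this; windows that are not yet available (adherence “N/A”) are excluded from the denominator:

// Hypothetical sketch of the noncompliance percentage described above.
type Adherence = "compliant" | "noncompliant" | "unknown" | "n/a";

function noncompliancePercentage(windows: { adherence: Adherence }[]): number {
  const counted = windows.filter(w => w.adherence !== "n/a");
  if (counted.length === 0) return 0;
  const noncompliant = counted.filter(w => w.adherence === "noncompliant").length;
  return 100 * noncompliant / counted.length;
}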

TODO: How do we represent persistent time windows? I think I have said elsewhere that performing such a session one time = compliant, but that might not be correct.

TODO: There’s nothing here about whether specific events should be generated for the user; we’re assuming that the user must do everything in the schedule.

TODO: This view operates on the most recent timestamp for each event. Study bursts generate multiple events and will work with this, but do we have cases where part of the timeline is supposed to repeat because an event is updated? For example, if the event is “visited the clinic” and the study expects participants to visit four times over the length of the study, we cannot capture this as an expectation. They’d have to create four separate events; they couldn’t just update “clinic_visited” four times.

Study-level paged APIs

These reports are not easily cacheable because the states depend on the current time of the request. A single report takes about 1/3 of a second to generate, so paginated lists of users would be prohibitively expensive to calculate on demand. Let’s assume we have a worker process that creates some form of update (probably of just the active days of the report) and caches it for the list view API. This means the list view will be behind the current state by a day or so. Coordinators could go in and look at individual accounts for more up-to-date information.

In past work I have also created “sweepers” for things like sporting event games that just endlessly loop and refresh caches. Given half a second for each record, for example, all of mPower could be updated every 16 minutes.
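A sweeper for these reports might look like the following sketch (TypeScript; the listing, report, and cache helpers are hypothetical, not existing Bridge code):

// Hypothetical sweeper loop: regenerate and cache each participant's report.
declare function listParticipantIds(studyId: string): Promise<string[]>;
declare function generateAdherenceReport(studyId: string, userId: string): Promise<object>;
declare function cachePut(key: string, json: string): Promise<void>;

async function sweepStudy(studyId: string): Promise<void> {
  for (;;) {
    for (const userId of await listParticipantIds(studyId)) {
      const report = await generateAdherenceReport(studyId, userId);
      await cachePut(`adherence:${studyId}:${userId}`, JSON.stringify(report));
    }
    // At ~0.5s per record, roughly 2,000 accounts refresh in about 16 minutes;
    // the list API then serves the cached JSON instead of recomputing state.
  }
}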

| Method | Path |
| --- | --- |
| GET | /v5/studies/{studyId}/participants/adherence/eventday?offsetBy=&pageSize= |

Here is a visual design for this feature; this is all I have on requirements at this point:

Each row represents one participant, presumably showing seven days around their current day in the study. The information on the left-hand side about week and study burst cannot hold for all the sessions in the row, and I am convinced the only way this view can be assembled is by aligning each user according to date (that is, there is no query that can produce this without relating it to calendar time, nor is it useful if it is not related to calendar time, since “this week” is implicitly calendrical).

Presumably we will only persist the records we need for this view, and then we can return all records in the paged API. I think one record in this API must be equal to one participant, and it will contain nested information to draw one or more of these rows.

DateAdherenceReport

Here’s one possible report structure for the above (all the sessions and session windows are squished into one day of information, and all the relative information is dropped because that dashboard is about now; it’s date-based). The dashboard above shows seven of these reports for each user, one per day. That is too much, so we need another level of summation where we create a week-long report for each user (see the sketch after the JSON below).

I do not yet understand how we show multiple sessions in the above report: side-by-side icons of different shapes? One row per kind of session? That will change this report.

This report can be calculated from the EventDay report.

{
    "studyId": "foo",
    "userId": "BnfcoJLwm95XXkd_Dxjo_w",
    "date": "2021-10-27",
    "sessions": [
        {
            "sessionGuid": "sePT2TAcQ-1ZBY_xeuoA7w0r",
            "label": "Persistent session",
            "symbol": "square",
            "sessionInstanceGuid": "BnfcoJLwm95XXkd_Dxjo_w",
            "state": "completed"
        },
        {
            "sessionGuid": "gDnRq7C1LMeTdT1aCDC4vTOo",
            "label": "Also repeats on enrollment at different cadence",
            "symbol": "triangle",
            "sessionInstanceGuid": "mwHBh8lxaW7zYMMOAhhQKw",
            "state": "started"
        }
    ]
}
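The weekly rollup suggested above might then look like this sketch (TypeScript interfaces; the names extend the JSON above and are assumptions, not a settled model):

// Hypothetical shapes: one daily report per local date, up to seven per rollup.
interface SessionOnDate {
  sessionGuid: string;
  label: string;
  symbol: string;
  sessionInstanceGuid: string;
  state: string; // one of the window states listed earlier
}

interface DateAdherenceReport {
  studyId: string;
  userId: string;
  date: string; // local date, e.g. "2021-10-27"
  sessions: SessionOnDate[];
}

interface WeeklyAdherenceReport {
  studyId: string;
  userId: string;
  startDate: string;           // first local date of the 7-day window
  days: DateAdherenceReport[]; // up to 7 entries, one per day
}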

Protocol adherence and notifications

There are definitely a lot of possible ways we could measure adherence, and I have asked some questions about this above. We can probably be pretty flexible here since, at some point, we have to load the whole state of the user’s participation to generate this report. We could then set a threshold (i.e. warn administrators if there are any noncompliant assessments, vs. setting a proportion threshold that would need to be hit before notifying administrators). This warning apparently applies to specific participants, so it would be generated as part of the worker process described above.

Messaging APIs

One UI design I have seen shows a list of notifications in an administrative UI, indicating specific participants that are out-of-adherence with the study protocol. This hints at a very large amount of functionality.

We have had requirements to message both participants and study administrators over the years. For example, very early on we embedded a blog in an app for this purpose. Then we implemented support for remote notifications and topics (rarely used). We often message participants through local notifications based on scheduling information. Now we are talking about showing notifications to administrators about participants being out-of-adherence in an administrative UI.

I would like to add a simple but complete messaging system to Bridge which could be used to persist, deliver, and record the consumption of messages. Features would include:

  • API would be push-pull (write a message, request a list of active messages for a recipient; push-push is a lot more complicated);

  • recipients could include individuals or organizations (and other groups once we have them);

  • each recipient could mark a message as “read” to remove it from their own UI, or “resolved” to remove it from everyone’s UIs; messages would not be deleted, so they remain auditable;

  • messages could be set to expire;

  • Messages could indicate whether they can be marked read or resolved, and whether they expire. For example, an adherence message might only allow “resolved,” since you don’t want multiple study coordinators all following up with the same participant.

  • Messages could be assigned (change recipients) and indicate a status like “busy” along with the user currently working on the message. Now you have a very simple way to hand off work between team members (see the sketch below).
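As a sketch, the message record implied by this feature list might look like the following (TypeScript; every field name here is an assumption, not an existing Bridge model):

// Hypothetical message record supporting read/resolve/expire/assign semantics.
interface BridgeMessage {
  guid: string;
  subject: string;
  body: string;
  recipients: { type: "individual" | "organization"; id: string }[];
  allowRead: boolean;    // recipients may mark it read (hides it for them only)
  allowResolve: boolean; // one recipient may resolve it for everyone
  expiresOn?: string;    // optional ISO timestamp after which it disappears
  readBy: string[];      // user IDs who have marked the message read
  resolvedBy?: string;   // user ID who resolved it, if any
  assignedTo?: string;   // current owner, enabling hand-off between team members
  status?: "busy";       // e.g. the assignee is actively working on the message
}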

Once pull messaging is working, we could support “push” messaging (where we send a message via email or SMS, or possibly through remote or local push notifications). This introduces a lot of additional complexity, however:

  • It would need to be integrated with Amazon’s messaging APIs, recording “sent” status. The advantage is that we’d have a record of such messaging, which is part of the study protocol and something we’ve built one-off solutions for scientists for in the past;

  • We’d want to implement messaging preferences, per user/organization and possibly by message type.
