Initially we referred to this as “compliance,” but because “compliance” also has a meaning in a governance context, we’re using “adherence” to describe the measurement of the extent to which a participant performs all of the assessments they are asked to perform as part of their participation in a study.

In the v2 science APIs, Bridge has three APIs that are involved in schedule adherence:

  1. The schedule itself, which is finite in length and described by a timeline;

  2. The set of adherence records for the participant which describe what the participant has done in that timeline;

  3. A set of event timestamps that tell us where the participant is in the timeline (i.e. what the participant should have done and what they should currently be doing in the timeline).

So far the Bridge server provides this information through separate APIs and does not keep track of participants’ local time. For the sake of efficiency, I think we want the Bridge server to combine this information and provide reports on the status of the account, which will be less resource intensive than serving out all the information to be processed by a client or worker process. The issues I see with this:

  1. For the first time the server will need to have knowledge of the participant’s time and time zone;

  2. The reports are (relatively) expensive to calculate, and they are written seldom and read frequently, so they will need to be cached.

API (Event Day Adherence Report)

Given the designs I have seen so far, I created the “event day” report which would be available via a single API:

Method: GET
Path: /v5/studies/{studyId}/participants/{userId|self}/adherence/eventday?activeOnly=true|false
Description: activeOnly=true shows only currently available days. This can be expanded to show days before or after, etc., as useful to clients.
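
For illustration, a client might call this endpoint as follows (a minimal sketch; the host, study ID, and session header are assumptions for the example, not part of this design):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EventDayReportClient {
    public static void main(String[] args) throws Exception {
        // Request only currently available days for the caller's own account.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.org/v5/studies/study1/participants/self/adherence/eventday?activeOnly=true"))
            .header("Bridge-Session", "<session token>") // auth header is an assumption
            .GET()
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON like the example below
    }
}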

The JSON looks as follows (note that we can add whatever further information we need based on the UI, though ultimately it may make better sense to look up session and assessment metadata in the timeline). The existing designs are currently minimal, with each window simply represented by a symbol colored to show its current adherence state:

[
  {
    "startEventId":"study_burst:ClinicVisit:01",
    "eventTimestamp":"2021-10-27T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"ePcCf6VmfIiVuU0ckdBeRw",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"DB13D4mO72j6S-g7PIkI2Q",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"wvEV4fJZQ0nfgY-TN2LekA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"IHDTSoj552vGDv1Qt7nXkg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ]
    },
    "type":"EventDayAdherenceReport"
  },
  {
    "startEventId":"study_burst:ClinicVisit:02",
    "eventTimestamp":"2021-11-16T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",
          "timeWindows":[
            {
              "sessionInstanceGuid":"zk7X4dQCy7Nvnuo2PcnSCA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"rMRne-cbwIN5mkGZLymxzg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "label": "Session #1",
          "symbol": "circle",          
          "timeWindows":[
            {
              "sessionInstanceGuid":"QXM1cO6yb0gSPWzRwRD8eA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"hCXFevxbBnpaUYjH212dsQ",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ]
    },
    "type":"EventDayAdherenceReport"
  }
]

All sessions in the timeline are grouped in this view by the event that triggers them, and then by the number of days since that event. All potential events in the schedule are included in this report whether or not they exist for the user (we don’t currently have a way to say “count this part of the schedule if the event exists for the user, but don’t count it if it doesn’t”). Then the actual days since each event are calculated to determine the state of each assessment (a sketch of this calculation follows the table below). The states are:

State | Description | Adherence
not_yet_available | Participant should not have seen or started this assessment; it is in the future. | N/A
unstarted | Participant should see the assessment (they are being asked to do it now), but they have not started it. | unknown
started | Participant has started the assessment. | unknown
completed | Participant has completed the assessment before it expired. | compliant
abandoned | Participant started the assessment, but it expired before they finished it. | noncompliant
expired | Participant did not start the assessment and it is now expired. | noncompliant
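
A minimal sketch of how the day grouping and these states might be derived, assuming simplified inputs (the class, method, and parameter names here are illustrative, not the actual Bridge implementation):

import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.temporal.ChronoUnit;

public class EventDayStates {
    enum State { NOT_YET_AVAILABLE, UNSTARTED, STARTED, COMPLETED, ABANDONED, EXPIRED }

    // Days since the event in the participant's local time zone, assuming the
    // event day itself is day 1 (matching the keys of the "entries" map above).
    static int daysSinceEvent(Instant eventTimestamp, ZoneId participantZone, Instant now) {
        LocalDate eventDay = eventTimestamp.atZone(participantZone).toLocalDate();
        LocalDate today = now.atZone(participantZone).toLocalDate();
        return (int) ChronoUnit.DAYS.between(eventDay, today) + 1;
    }

    // Derive a window's state from its availability interval and the participant's
    // adherence record timestamps (null if the participant never started/finished).
    static State stateFor(Instant now, Instant availableOn, Instant expiresOn,
            Instant startedOn, Instant finishedOn) {
        if (now.isBefore(availableOn)) {
            return State.NOT_YET_AVAILABLE; // it's in the future
        }
        if (finishedOn != null && finishedOn.isBefore(expiresOn)) {
            return State.COMPLETED; // finished before the window expired
        }
        if (!now.isBefore(expiresOn)) { // the window has expired
            return startedOn != null ? State.ABANDONED : State.EXPIRED;
        }
        return startedOn != null ? State.STARTED : State.UNSTARTED;
    }
}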

I would propose that a participant’s noncompliance percentage is equal to noncompliant / (compliant + noncompliant + unknown). We can then set a threshold at which we would want to intervene (from 0%, where any failure gets reported, to something less stringent).
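
A sketch of this calculation, with the threshold check alongside it (class and method names are illustrative):

public class AdherenceMath {
    // Noncompliance percentage as proposed above:
    // noncompliant / (compliant + noncompliant + unknown).
    static double noncompliancePercentage(int compliant, int noncompliant, int unknown) {
        int total = compliant + noncompliant + unknown;
        if (total == 0) {
            return 0.0; // nothing in the participant's timeline is countable yet
        }
        return 100.0 * noncompliant / total;
    }

    // A threshold of 0% reports any failure; higher values are less stringent.
    static boolean shouldIntervene(double noncompliancePct, double thresholdPct) {
        return noncompliancePct > thresholdPct;
    }
}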

TODO: How do we represent persistent time windows? I think I have said elsewhere that performing it one time = compliant, but that might not be correct.

TODO: There’s nothing here about whether or not specific events should be generated for the user; we’re assuming that the user must do everything in the schedule.

TODO: This view operates on the most recent timestamp for each event. Study bursts generate multiple events and will work with this, but do we have cases where a part of the timeline is supposed to repeat due to an event being updated? For example, if the event is “visited the clinic” and the study expects participants to visit four times over the length of the study, we cannot capture this as an expectation. They’d have to create four separate events for this; they couldn’t just update “clinic_visited” or some such.

Caching individual participant adherence reports

These reports are not easily cacheable because the states depend entirely on the current time of the request. A single report takes about 1/3 of a second to calculate, but paginated lists of users would be prohibitively expensive. We may need to calculate these nightly and store them with an additional timestamp, so they can be retrieved in paginated form. As a result, this higher-level view and any reports generated from it will not be “real time,” but individuals could be explored or refreshed to see more up-to-date information.

In past work I have created “sweepers” for things like sporting event games that just endlessly loop and refresh caches. This allows you to balance freshness of the data with resource utilization.
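
A sketch of what such a sweeper might look like for these reports, assuming a hypothetical AdherenceReportService that knows how to recalculate and cache them:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AdherenceReportSweeper {
    // Hypothetical service that recalculates and caches all adherence reports.
    interface AdherenceReportService {
        void refreshAllCachedReports();
    }

    private final ScheduledExecutorService executor =
            Executors.newSingleThreadScheduledExecutor();

    public void start(AdherenceReportService service) {
        // scheduleWithFixedDelay waits for each pass to finish before starting
        // the next delay, so slow passes never pile up; tune the delay to
        // balance freshness of the data against resource utilization.
        executor.scheduleWithFixedDelay(service::refreshAllCachedReports,
                0, 1, TimeUnit.HOURS);
    }
}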

This probably means that we will have a flag on App to enable this nightly report generation for specific apps. We already have reports that count uploads and sign-ins, but we hard-coded exclusion lists so they don’t run in most of our Apps. We should be able to toggle these reports through the Bridge APIs.

Protocol adherence

There are many possible ways we could measure adherence. However, it can be measured as a proportion of compliant to noncompliant scheduled assessment states. We could then set a threshold for this (i.e. warn administrators if there are any noncompliant assessments, vs. setting a proportion threshold that would need to be crossed before notifying administrators). This can be calculated as part of the event day adherence report.

Study-level summaries

On the initial timeline_retrieved event, and thereafter whenever the participant’s adherence records or events are updated, we could calculate and store the user’s event day adherence report. I do wonder about putting this on a queue and having a worker process perform the calculation so it doesn’t slow down the originating request; see the sketch below. The goal would be to expose a paginated API of all the active records (unstarted, started, and compliant) for all users.
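
For example, the API call that updates an adherence record or event could enqueue a recalculation request rather than computing the report inline (a sketch assuming an SQS queue, since Bridge workers are already SQS-driven; the queue URL and message shape are hypothetical):

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class AdherenceRecalculationQueue {
    // Hypothetical queue consumed by a Bridge worker process.
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/000000000000/adherence-reports";

    private final SqsClient sqs = SqsClient.create();

    public void requestRecalculation(String appId, String studyId, String userId) {
        // Hypothetical message shape; the worker recalculates and stores the report.
        String body = String.format(
                "{\"appId\":\"%s\",\"studyId\":\"%s\",\"userId\":\"%s\"}",
                appId, studyId, userId);
        sqs.sendMessage(SendMessageRequest.builder()
                .queueUrl(QUEUE_URL)
                .messageBody(body)
                .build());
    }
}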

Arm and study summaries

I’m not sure what the requirements are for these, but worker processes would periodically (or when prompted by a message) calculate and store them. The logical place to store them would be study reports (though again, a SQL table is an option).

Messaging APIs

One UI design I have seen shows a list of notifications in an administrative UI, indicating specific participants that are out-of-adherence with the study protocol. This hints at a very large amount of functionality.

We have had requirements to message both participants and study administrators over the years. For example, very early on we embedded a blog in an app for this purpose. Then we implemented support for remote notifications and topics (rarely used). We often message participants through local notifications based on scheduling information. Now we are talking about showing notifications to administrators about participants being out-of-adherence in an administrative UI.

I would like to add a simple but complete messaging system to Bridge which could be used to persist, deliver, and record the consumption of messages. Features would include:

  • API would be push-pull (write a message, request a list of active messages for a recipient; push-push is a lot more complicated);

  • recipients could include individuals or organizations (and other groups once we have them);

  • each recipient could mark a message as “read” to remove it from their own UI (separately from other recipients), or “resolved” to remove it from everyone’s UIs. Messages would not be deleted, so they remain auditable;

  • messages could be set to expire;

  • Messages could indicate whether they can be read or resolved, and whether they expire. For example, an adherence message might only allow you to mark it “resolved,” since you don’t want multiple study coordinators all following up with the same participant;

  • Messages could be assigned (change recipients), and indicate a status like “busy” along with the user currently working on the message. Now you have a very simple way to hand off work between team members.
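
To make the feature list above concrete, here is one possible shape for a persisted message (all field names are illustrative, not an existing Bridge model):

import java.time.Instant;
import java.util.Set;

public class Message {
    String messageId;
    String subject;
    String body;
    Set<String> recipientUserIds;  // individual recipients
    Set<String> recipientOrgIds;   // organization recipients
    Set<String> readBy;            // per-recipient "read" marks (hides it from that UI only)
    String resolvedBy;             // set once; hides the message from everyone's UIs
    String assignedTo;             // optional hand-off to the user working on it
    String status;                 // e.g. "busy" while someone is working on it
    boolean readable;              // whether per-recipient "read" is allowed
    boolean resolvable;            // whether "resolved" is allowed
    Instant expiresOn;             // optional expiration; null = never expires
    // Messages are never deleted, so the record remains auditable.
}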

Once pull messaging is working, we could support “push” messaging (where we send a message via email or SMS, or possibly through remote or local push notifications). This introduces a lot of additional complexity, however:

  • It would need to be integrated with Amazon’s messaging APIs, recording “sent” status. The advantage is that we’d have a record of such messaging, which is part of the study protocol and something we’ve built one-off solutions for scientists for in the past;

  • We’d want to implement messaging preferences, per user/organization and possibly by message type.
