Initially we referred to this as “compliance,” but because “compliance” also has a meaning in a governance context, we’re using “adherence” to describe the measurement of how completely a participant performs the assessments they are asked to perform as part of their participation in a study.

In the v2 science APIs, Bridge has three APIs that are involved in schedule adherence:

  1. The schedule itself, which is finite in length and described by a timeline;

  2. The set of adherence records for the participant which describe in some detail what the participant has done in that timeline;

  3. A set of event timestamps that orient the participant in the timeline and tell us where they should be in terms of performing the assessments of the study.

So far the Bridge server provides this information through separate APIs and does not keep track of participants’ local time. For the sake of efficiency, I think we want the Bridge server to combine this information and provide reports on the status of the account, which will be less resource intensive than serving out all the information to be processed by a client or worker process. The issues I see with this:

  1. For the first time the server will need to have knowledge of the participant’s time and time zone;

  2. The reports are (relatively) expensive to calculate; in bulk they are not likely to change frequently, and they will need to be the building block for arm and study summary reports, so they will be read often, written seldom, and will need to be aggressively cached.

Adherence Event Day Report
The basic structure of the adherence record (per participant) is:

  • durationInDays (the highest endDay value calculated by the timeline, which can be lower than the schedule duration, but the schedule duration will cap any open-ended sequences);

  • Enum studyAdherence (compliant, not in compliance, some measure of this);

  • Map<Stream,List<SessionAdherence>> adherenceTimeline;

There’s nothing in this about event sequences that haven’t happened because their triggering event hasn’t occurred. We might need to encode, in a schedule, what is desired in terms of event generation.

There’s nothing about individual sessions or assessments being optional. There’s no easy way to determine adherence for persistent time windows, so we’ll assume that performing such a window one time is compliant…maybe.

Stream

The timeline describes parts of the schedule that can repeat, if their triggering events change over time. Study bursts are another example of repeating parts of the schedule. Each of these things represents a separate stream.

  • label (the session name, probably)

  • startEventId

  • eventTimestamp

  • session window GUID (each time window generates one stream)

SessionAdherence (it might be possible to edit the state if a coordinator knows the state better). In the visual designs, all time windows are grouped under each session (in our server model, each window creates a separate stream of activities, but we would want a set of tests). So we might have a set of WindowAdherence records, as below, in one SessionAdherence container (I’m not sure there’s much in the container).

  • state
    - not yet available: NOT_YET_AVAILABLE
    - available but unstarted: UNSTARTED
    - started: STARTED
    - successfully completed: COMPLETED
    - expired, unfinished: ABANDONED
    - expired, never started: IGNORED/NONCOMPLIANT

  • instanceGuid

  • sessionGuid

  • startDay

  • endDay

  • startTime

  • expiration

SessionAdherence needs to boil down the completed state of the session based on the timestamps, I think. A user would be out of compliance if some of their states are expired. It should be possible to calculate these records and then store them so Bridge can deliver them, and a summary of the state of the record should probably also be saved for efficient querying.
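
Pulling the pieces above together, here is a minimal Java sketch of these structures. The field and enum names come from the lists above; everything else, including the deriveState logic, is one plausible reading of “boil down the completed state based on the timestamps,” not a settled implementation.

import java.time.Instant;
import java.util.List;
import java.util.Map;

public class AdherenceReportSketch {

    // Overall study adherence ("compliant, not in compliance, some
    // measure of this"); the real enum is still to be defined.
    enum StudyAdherence { COMPLIANT, NONCOMPLIANT }

    // Completion states for a single time window, from the list above.
    enum SessionState {
        NOT_YET_AVAILABLE, UNSTARTED, STARTED, COMPLETED, ABANDONED, IGNORED
    }

    // Each triggering event/time window pairing produces one stream.
    static class Stream {
        String label;          // the session name, probably
        String startEventId;
        Instant eventTimestamp;
        String timeWindowGuid; // each time window generates one stream
    }

    // One scheduled session instance within a stream.
    static class SessionAdherence {
        SessionState state;
        String instanceGuid;
        String sessionGuid;
        int startDay;
        int endDay;
        String startTime;  // local time of day, e.g. "08:00"
        String expiration; // ISO 8601 period, e.g. "P3D"
    }

    // The per-participant report.
    static class AdherenceReport {
        int durationInDays; // highest endDay calculated by the timeline
        StudyAdherence studyAdherence;
        Map<Stream, List<SessionAdherence>> adherenceTimeline;
    }

    // One plausible way to boil a session's state down from its timestamps.
    static SessionState deriveState(Instant now, Instant availableOn,
            Instant expiresOn, Instant startedOn, Instant finishedOn) {
        if (finishedOn != null) {
            return SessionState.COMPLETED;
        }
        if (now.isBefore(availableOn)) {
            return SessionState.NOT_YET_AVAILABLE;
        }
        if (now.isAfter(expiresOn)) {
            return (startedOn != null) ? SessionState.ABANDONED
                                       : SessionState.IGNORED;
        }
        return (startedOn != null) ? SessionState.STARTED
                                   : SessionState.UNSTARTED;
    }
}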

API

Method  Path
GET     /v5/studies/{studyId}/participants/{userId|self}/adherence/eventday
GET     /v5/studies/{studyId}/participants/adherence/eventday

Given Lynn’s designs, here’s what I think the JSON would look like. (I have prototyped this; it’ll work, though it involves loading everything about the user.) The single biggest thing about her designs is that she regroups the time window event streams under each encompassing session instance. Also note that in the view below we return the entire timeline for the user, but it is possible to calculate the active components or adherence percentages from the time window completion states (see the sketch after the JSON). Finally, this view assumes current timestamps only; such a view could be constructed for all timestamps, and I don’t know how that would be organized.

[
  {
    "startEventId":"study_burst:ClinicVisit:01",
    "eventTimestamp":"2021-10-27T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "timeWindows":[
            {
              "sessionInstanceGuid":"ePcCf6VmfIiVuU0ckdBeRw",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"DB13D4mO72j6S-g7PIkI2Q",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "timeWindows":[
            {
              "sessionInstanceGuid":"wvEV4fJZQ0nfgY-TN2LekA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"IHDTSoj552vGDv1Qt7nXkg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ]
    },
    "type":"EventDayAdherenceReport"
  },
  {
    "startEventId":"study_burst:ClinicVisit:02",
    "eventTimestamp":"2021-11-16T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "timeWindows":[
            {
              "sessionInstanceGuid":"zk7X4dQCy7Nvnuo2PcnSCA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"rMRne-cbwIN5mkGZLymxzg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "timeWindows":[
            {
              "sessionInstanceGuid":"QXM1cO6yb0gSPWzRwRD8eA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"hCXFevxbBnpaUYjH212dsQ",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ]
    },
    "type":"EventDayAdherenceReport"
  }
]
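
As an illustration of calculating an adherence percentage from the time window completion states, here is a minimal sketch. The definition used (completed windows over windows that have become available) is an assumption for illustration, not a settled metric.

import java.util.List;

public class AdherenceMath {

    // Window completion states, as above.
    enum WindowState {
        NOT_YET_AVAILABLE, UNSTARTED, STARTED, COMPLETED, ABANDONED, IGNORED
    }

    // One possible adherence percentage: completed windows divided by all
    // windows that have become available. How to weight still-open
    // UNSTARTED/STARTED windows is a design decision, not settled here.
    static int adherencePercent(List<WindowState> states) {
        long due = states.stream()
                .filter(s -> s != WindowState.NOT_YET_AVAILABLE)
                .count();
        if (due == 0) {
            return 100; // nothing has come due yet
        }
        long completed = states.stream()
                .filter(s -> s == WindowState.COMPLETED)
                .count();
        return (int) Math.round(100.0 * completed / due);
    }
}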

In the second API, which we would probably need to calculate with a worker, only entries with active elements would be returned. That can be more than one record for a given participant, and we might simply implement a filter in this server process to retrieve the required records (a sketch follows). However, we may need caching/persistence, because this only changes when an adherence record is submitted for a user.
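
A minimal sketch of such a filter, using hypothetical stand-in types for the report entries (“active” here meaning currently available or in progress):

import java.util.List;
import java.util.stream.Collectors;

public class ActiveEntryFilter {

    // Illustrative stand-ins for the report entries shown above.
    record TimeWindow(String sessionInstanceGuid, String state) {}
    record EventDayReport(String startEventId, List<TimeWindow> windows) {}

    // "Active" means the window is currently available or in progress.
    static boolean isActive(TimeWindow w) {
        return "unstarted".equals(w.state()) || "started".equals(w.state());
    }

    // Keep only report entries that contain at least one active window.
    static List<EventDayReport> activeOnly(List<EventDayReport> reports) {
        return reports.stream()
                .filter(r -> r.windows().stream()
                        .anyMatch(ActiveEntryFilter::isActive))
                .collect(Collectors.toList());
    }
}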

Caching individual participant adherence reports

There are options:

  • Cache them with Redis (probably my least-preferred option since domain logic will be based on these reports);

  • Write them to a participant report either when requested, or on a nightly basis, and expose only reports through the APIs (treating reports as a caching mechanism);

  • Write the report back to the database (possibly more performant than calculating in memory, though that’s not proven). If we want paginated views of individual performance status, then we’ll need to define a schema for the base information and store it back to a table.

The reports only need to change when the user submits an event or an adherence record (or when the timeline changes, which would invalidate quite a lot). Typically we delete the cached item on these writes and let the next read re-create it, as in the sketch below.
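
A minimal sketch of that cache-aside pattern. The class and method names are illustrative; the backing store could be Redis, a participant report, or a SQL table, per the options above.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class AdherenceReportCache {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> calculator; // userId -> report JSON

    public AdherenceReportCache(Function<String, String> calculator) {
        this.calculator = calculator;
    }

    // Reads re-create the report on a cache miss.
    public String getReport(String userId) {
        return cache.computeIfAbsent(userId, calculator);
    }

    // Call when an event or adherence record is written for the user.
    public void onAdherenceWrite(String userId) {
        cache.remove(userId); // the next read recalculates
    }

    // Call when the timeline changes; this invalidates quite a lot.
    public void onTimelineChange() {
        cache.clear();
    }
}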

Arm and study summaries

I’m not sure what the requirements are for these, but worker processes would periodically (or when prompted by a message) calculate and store them. The logical place would be study reports (though again, a SQL table is an option).

Messaging APIs

One UI design I have seen shows a list of notifications in an administrative UI, indicating specific participants that are out-of-adherence with the study protocol. This hints at a very large amount of functionality.

We have had requirements to message both participants and study administrators over the years. For example, very early on we embedded a blog in an app for this purpose. Then we implemented support for remote notifications and topics (rarely used). We often message participants through local notifications based on scheduling information. Now we are talking about showing notifications to administrators about participants being out-of-adherence in an administrative UI.

I would like to add a simple but complete messaging system to Bridge which could be used to persist, deliver, and record the consumption of messages. Features would include:

  • API would be push-pull (write a message, request a list of active messages for a recipient; push-push is a lot more complicated);

  • recipients could include individuals or organizations (and other groups once we have them);

  • each recipient could mark a message as “read” to remove it from their UI, separately from other recipients, or “resolved” to remove it from everyone’s UIs. Messages would not be deleted, so they remain auditable;

  • messages could be set to expire;

  • messages could indicate whether they can be read or resolved, and whether they expire. For example, an adherence message might only allow you to mark it “resolved,” since you don’t want multiple study coordinators all following up with the same participant;

  • messages could be assigned (changing recipients), and could indicate a status like “busy” along with the user currently working on the message. Now you have a very simple way to hand off work between team members. A sketch of such a message record follows this list.
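
Here is a sketch of what such a message record might look like. Nothing here exists in Bridge yet, and all names are hypothetical.

import java.time.Instant;
import java.util.EnumSet;
import java.util.Set;

public class Message {

    enum Capability { READABLE, RESOLVABLE, EXPIRING }
    enum Status { OPEN, BUSY, RESOLVED }

    String messageId;
    String body;
    Set<String> recipientIds;         // individuals or organizations
    Set<String> readBy;               // "read" hides the message per recipient
    Status status = Status.OPEN;      // "resolved" hides it for everyone
    String assignedTo;                // supports hand-off between team members
    Instant expiresOn;                // optional expiration
    EnumSet<Capability> capabilities; // e.g., an adherence message might be
                                      // RESOLVABLE only, so a single
                                      // coordinator claims the follow-up
}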

Once pull messaging is working, we could support “push” messaging (where we send a message via email or SMS, or possibly through remote or local push notifications). This introduces a lot of additional complexity, however:

  • It needs to be integrated with Amazon’s messaging APIs, where we record “sent” status. The advantage is that we’d have a record of such messaging, which is part of the study protocol and something we’ve built one-off solutions for scientists for in the past;

  • We’d want to implement messaging preferences, per user/organization and possibly by message type.
