...

  1. The schedule itself, which is finite in length and described by a timeline;

  2. The set of adherence records for the participant, which describe in some detail what the participant has done in that timeline;

  3. A set of event timestamps that orient the participant in the timeline and tell us where they are (i.e. what the user should have done, and what they should currently be doing, in terms of performing the assessments of the study).

So far the Bridge server provides this information through separate APIs and does not keep track of participants’ local time. For the sake of efficiency, I think we want the Bridge server to combine this information and provide reports on the status of the account, which will be less resource intensive than serving out all the information to be processed by a client or worker process. The issues I see with this:

  1. For the first time, the server will need to have knowledge of the participant’s time and time zone;

  2. The reports are (relatively) expensive to calculate in bulk, they are not likely to change frequently, and they will be the building block for arm and study summary reports; they are write-seldom, read-frequently in nature, so they will need to be aggressively cached.

...


API (Event Day Adherence Report)

Given the designs I have seen so far, I created the “event day” report, which would be available via the following APIs:

GET /v5/studies/{studyId}/participants/{userId|self}/adherence/eventday
    Returns the event day adherence report for a single participant (“self” returns the caller’s own report).

GET /v5/studies/{studyId}/participants/adherence/eventday
    Returns event day adherence reports across the study’s participants (probably calculated by a worker; see below).

Given Lynn’s designs, here’s what I think the JSON would look like. (I have prototyped this; it’ll work, though it involves loading everything about the user.) The biggest change in her designs is that she regroups the time window event streams into each encompassing session instance. Also note that in the view below we return the entire timeline for the user, but it is possible to calculate the active components or adherence percentages from the time window completion states. Finally, this view assumes current timestamps only; such a view could be constructed for all timestamps, but I don’t know how that would be organized.

...

The JSON looks as follows:

Code Block
languagejson
[
  {
    "startEventId":"study_burst:ClinicVisit:01",
    "eventTimestamp":"2021-10-27T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "startEventIdsessionGuid":"study_burst:ClinicVisit:01vZBHBVv_H2_1TBbELF48czjS",
    "eventTimestamp":"2021-10-27T19:00:00.000Z",      "entriestimeWindows":{[
       "1":[     {
   {           "sessionGuidsessionInstanceGuid":"vZBHBVv_H2_1TBbELF48czjSePcCf6VmfIiVuU0ckdBeRw",
          "timeWindows":[             {
              "sessionInstanceGuid":"ePcCf6VmfIiVuU0ckdBeRw",
              "timeWindowGuid":""timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"DB13D4mO72j6S-g7PIkI2Q",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "timeWindows":[
            {
              "sessionInstanceGuid":"wvEV4fJZQ0nfgY-TN2LekA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"IHDTSoj552vGDv1Qt7nXkg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ]
    },
    "type":"EventDayAdherenceReport"
  },
  {
    "startEventId":"study_burst:ClinicVisit:02",
    "eventTimestamp":"2021-11-16T19:00:00.000Z",
    "entries":{
      "1":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "timeWindows":[
            {
              "sessionInstanceGuid":"zk7X4dQCy7Nvnuo2PcnSCA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"rMRne-cbwIN5mkGZLymxzg",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ],
      "2":[
        {
          "sessionGuid":"vZBHBVv_H2_1TBbELF48czjS",
          "timeWindows":[
            {
              "sessionInstanceGuid":"QXM1cO6yb0gSPWzRwRD8eA",
              "timeWindowGuid":"sUaNAasy_LiT3_IYa1Fx_dSv",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            },
            {
              "sessionInstanceGuid":"hCXFevxbBnpaUYjH212dsQ",
              "timeWindowGuid":"Bw6rAAeG6zotqes4cLSgKjh5",
              "state":"not_yet_available",
              "type":"EventDayWindow"
            }
          ],
          "type":"EventDay"
        }
      ]
    },
    "type":"EventDayAdherenceReport"
  }
]

In the second API, which we probably need to calculate with a worker, only entries with active elements would be returned. That can be more than one record for a given participant, and we might simply implement a filter in the server process to retrieve the required records. However, we may need caching/persistence, because this only changes when an adherence record is submitted for a user.
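
As an illustration, here is a minimal sketch of that filter; all of the type and method names here are assumptions for illustration, not the actual Bridge model:

Code Block
languagejava
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical filter for the second API: keep only reports that contain at
// least one currently actionable time window.
public class ActiveEntryFilter {

    public List<EventDayAdherenceReport> activeOnly(List<EventDayAdherenceReport> reports) {
        return reports.stream()
            .filter(report -> report.getEntries().values().stream()
                .flatMap(List::stream)                          // the EventDay entries for each day
                .flatMap(day -> day.getTimeWindows().stream())  // each entry's time windows
                .anyMatch(w -> "unstarted".equals(w.getState())
                            || "started".equals(w.getState())))
            .collect(Collectors.toList());
    }
}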

...



All sessions in the timeline are combined in this view, so that there is exactly one entry for each event, containing a list, by day, of the activities the participant should perform as a result of that event. All potential events in the schedule are included in this report whether they exist for the user or not. Then the actual “days since each event” values are calculated to determine the state of each of these assessments:

State | Description | Adherence
not_yet_available | Participant should not have seen or started this assessment. | N/A
unstarted | Participant should see the assessment (they are being asked to do it now), but they have not started it. | unknown
started | Participant has started the assessment. | unknown
completed | Participant completed the assessment before it expired and was removed from the UI. | compliant
abandoned | Participant started the assessment, but it expired before they finished it. | noncompliant
ignored | Participant did not start the assessment, and it is now expired. | noncompliant
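
As a sketch, the “days since each event” calculation might look like the following (it assumes we have the participant’s IANA time zone, per the first issue noted above; the names are illustrative):

Code Block
languagejava
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

// Sketch: place a participant on the timeline by computing the days since an
// event, in the participant's own time zone.
public class EventDayCalculator {

    public long daysSince(ZonedDateTime eventTimestamp, String participantTimeZone) {
        ZoneId zone = ZoneId.of(participantTimeZone); // e.g. "America/Los_Angeles"
        LocalDate eventDate = eventTimestamp.withZoneSameInstant(zone).toLocalDate();
        LocalDate today = LocalDate.now(zone);
        // A time window with startDay <= daysSince <= endDay is currently available.
        return ChronoUnit.DAYS.between(eventDate, today);
    }
}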

I would propose that a participant’s noncompliance percentage is equal to noncompliant / (compliant + noncompliant + unknown). We can then set a threshold at which we would want to intervene (from 0%, where any failure gets reported, to something less stringent).
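
A quick worked example of that formula (the counts are made up):

Code Block
languagejava
public class NoncomplianceExample {
    public static void main(String[] args) {
        long compliant = 10, noncompliant = 2, unknown = 4;
        double pct = 100.0 * noncompliant / (compliant + noncompliant + unknown);
        System.out.println(pct + "%"); // 12.5%; over a 10% threshold, so this participant is flagged
    }
}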

Adherence Event Day Report
The basic structure of an adherence record (per participant) is as follows (sketched in code after this list):

  • durationInDays (the highest endDay value calculated by the timeline, which can be lower than the schedule duration, but the schedule duration will cap any open-ended sequences);

  • Enum studyAdherence (compliant, noncompliant, or some measure of this);

  • Map<Stream,List<SessionAdherence>> adherenceTimeline;
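
Sketched in code, the record might look like this (the type names are my assumptions, not a final design):

Code Block
languagejava
import java.util.List;
import java.util.Map;

// Hypothetical shape of the per-participant adherence record described above.
public class AdherenceReportRecord {
    int durationInDays;                                    // highest endDay in the timeline
    StudyAdherence studyAdherence;                         // overall measure of compliance
    Map<Stream, List<SessionAdherence>> adherenceTimeline;

    enum StudyAdherence { COMPLIANT, NONCOMPLIANT }
}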

There’s nothing in this about event sequences that haven’t happened because they haven’t been triggered by an event. We might need to encode, in a schedule, what is desired in terms of event generation.

There’s nothing about individual sessions or assessments being optional. There’s no easy way to determine adherence for persistent time windows, so we’ll assume that performing such a window one time is compliant…maybe.

Stream

The timeline describes parts of the schedule that can repeat if their triggering events change over time. Study bursts are also an example of repeating parts of the schedule. Each of these represents a separate stream, with the following fields (see the sketch after this list):

  • label (the session name, probably)

  • startEventId

  • eventTimestamp

  • session window GUID (each time window generates one stream)
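
A corresponding sketch of the stream key (again, the names are assumptions):

Code Block
languagejava
import java.time.Instant;

// Hypothetical stream key: one stream per (event, time window) pair.
public class Stream {
    String label;           // the session name, probably
    String startEventId;    // e.g. "study_burst:ClinicVisit:01"
    Instant eventTimestamp;
    String timeWindowGuid;  // each time window generates one stream
}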

SessionAdherence (it might be possible to edit the state if a coordinator knows the state better). In the visual designs, all time windows are grouped under each session (in our server model, each window creates a separate stream of activities…but we would want a set of tests). So we might have a set of WindowAdherence records, as below, in one session adherence container (I’m not sure there’s much in the container).

  • state
    - not yet available → NOT_YET_AVAILABLE
    - available but unstarted → UNSTARTED
    - started → STARTED
    - successfully completed → COMPLETED
    - expired, unfinished → ABANDONED
    - expired, never started → IGNORED/NONCOMPLIANT

  • instanceGuid

  • sessionGuid

  • startDay

  • endDay

  • startTime

  • expiration

SessionAdherence needs to boil down the completed state of the session based on the timestamps, I think. The user would be out of compliance if some of their states are expired. It should be possible to calculate these records and then store them so Bridge can deliver them, and a summary of the state of the record should probably also be saved for efficient querying.
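
Here is a minimal sketch of how that state might be boiled down for a single time window (the date fields and method signature are assumptions):

Code Block
languagejava
import java.time.LocalDate;

// Hypothetical derivation of a window's state from its availability dates and
// what the participant has reported so far.
public class WindowStateCalculator {

    enum State { NOT_YET_AVAILABLE, UNSTARTED, STARTED, COMPLETED, ABANDONED, IGNORED }

    public State calculate(LocalDate today, LocalDate availableOn, LocalDate expiresOn,
            boolean started, boolean finished) {
        if (today.isBefore(availableOn)) {
            return State.NOT_YET_AVAILABLE;
        }
        if (finished) {
            return State.COMPLETED;
        }
        if (today.isAfter(expiresOn)) {
            // Expired: abandoned if it was ever started, ignored if it never was.
            return started ? State.ABANDONED : State.IGNORED;
        }
        return started ? State.STARTED : State.UNSTARTED;
    }
}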

Caching individual participant adherence reports

There are options:

  • Cache them with Redis (probably my least-preferred option since domain logic will be based on these reports);

  • Write them to a participant report either when requested, or on a nightly basis, and expose only reports through the APIs (treating reports as a caching mechanism);

  • Write the report back to the database (actually more performant than calculating in memory? Not proven). If we want paginated views of individual performance status, then we’ll need to define a schema for the base information and store it back to a table.

The reports only need to change when the user submits an event or an adherence record (or when the timeline changes, which would invalidate quite a lot). Typically we delete the cached item on these writes and let the next read re-create the cached item.
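
A sketch of that delete-on-write pattern (the DAO and cache types are assumptions for illustration):

Code Block
languagejava
// Hypothetical write path: persist the adherence record, then invalidate the
// participant's cached report so the next read rebuilds it.
public class AdherenceService {
    private AdherenceRecordDao adherenceRecordDao; // hypothetical persistence layer
    private CacheProvider cacheProvider;           // hypothetical cache wrapper

    public void saveAdherenceRecord(String studyId, String userId, AdherenceRecord record) {
        adherenceRecordDao.save(record);
        cacheProvider.removeObject("adherence:" + studyId + ":" + userId);
    }
}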

Protocol adherence

There are definitely a lot of possible ways we could measure adherence. However, it can be measured as the proportion of compliant to noncompliant scheduled assessment states. We could then set a threshold for this (i.e. warn administrators if there are any noncompliant assessments, vs. setting a proportion threshold that would need to be reached before notifying administrators). This can be calculated as part of the event day adherence report.

Arm and study summaries

I’m not sure what the requirements are for these, but worker processes would periodically (or when prompted by a message) calculate and store them. The logical place to keep them would be study reports (but again, a SQL table is an option).

...