...

I would propose that a participant’s noncompliance percentage equals noncompliant / (compliant + noncompliant + unknown). We can then set a threshold at which we want to intervene, from 0% (any failure gets reported) to something less stringent.
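As a sketch of that calculation and threshold check (the class and method names below are hypothetical, not an existing API):

Code Block
// Hypothetical sketch of the proposed calculation; the class and method
// names are illustrative, not an existing API.
public class NoncomplianceCheck {
  static double noncompliancePercentage(int compliant, int noncompliant, int unknown) {
    int total = compliant + noncompliant + unknown;
    if (total == 0) {
      return 0.0; // nothing scheduled yet, nothing to report
    }
    return 100.0 * noncompliant / total;
  }

  // A threshold of 0% reports any failure; higher thresholds are less stringent.
  static boolean shouldIntervene(int compliant, int noncompliant, int unknown,
      double thresholdPercent) {
    return noncompliancePercentage(compliant, noncompliant, unknown) > thresholdPercent;
  }
}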

TODO: How do we represent persistent time windows? I think I have said elsewhere that performing it one time = compliant, but that might not be correct.

TODO: There’s nothing here about whether specific events should be generated for the user; we’re assuming that the user must do everything in the schedule.

TODO: This view operates on the most recent timestamp for each event. Study bursts generate multiple events and will work with this, but do we have cases where part of the timeline is supposed to repeat because an event was updated? For example, if the event is “visited the clinic” and the study expects participants to visit four times over the length of the study, we cannot capture this as an expectation. They’d have to create four separate events for this; they couldn’t just update “clinic_visited” four times.

Study-level paged APIs

These reports are not easily cacheable because the states depend on the current time of the request. A single report takes about 1/3 of a second, so paginated lists of users would be prohibitively expensive to calculate on demand. Let’s assume we have a worker process that creates some form of update (probably of just the active days of the report) and caches it for the list view API. This means the list view will lag the current state by a day or so. Coordinators could go in and look at individual accounts for more up-to-date information.

In past work I have also created “sweepers” for things like sporting events that just endlessly loop and refresh caches. Given half a second for each record, for example, all of mPower could be updated every 16 minutes (at that rate, one 16-minute pass covers roughly 1,900 records).
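A minimal sketch of such a sweeper, assuming a hypothetical report generator and account-ID source (neither is an existing interface):

Code Block
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sweeper sketch: loop endlessly over accounts, regenerating and
// caching each report. The generator and ID source here are assumptions.
public class AdherenceSweeper implements Runnable {
  private final Iterable<String> userIds;           // assumed source of account IDs
  private final Function<String, Object> reportFor; // assumed generator, ~0.5 s per call
  private final Map<String, Object> cache = new ConcurrentHashMap<>();

  AdherenceSweeper(Iterable<String> userIds, Function<String, Object> reportFor) {
    this.userIds = userIds;
    this.reportFor = reportFor;
  }

  @Override
  public void run() {
    // At ~0.5 s per record, a pass over ~1,900 accounts takes about 16 minutes.
    while (!Thread.currentThread().isInterrupted()) {
      for (String userId : userIds) {
        cache.put(userId, reportFor.apply(userId));
      }
    }
  }
}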

...

Method | Path
GET    | ...

...

Persistent time windows will be excluded from all adherence reports. Completing assessments that are part of a persistent window does not count for or against adherence.

All of these views operate on the most recent timestamp for each event. Building schedules that rely on a mutable event changing, and triggering a new timeline of sessions to perform, will not work with these adherence APIs; that would be events like “session type X has been completed.” On the other hand, the reports will show compliance with the most recent time stream, and that might be all that matters, since past time streams are no longer actionable.
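As a sketch of those two rules together, using placeholder types rather than the real model classes:

Code Block
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Placeholder sketch of the two rules above; these are not the real model classes.
public class AdherenceInputs {
  record Window(String sessionGuid, boolean persistent) {}
  record EventRecord(String eventId, Instant timestamp) {}

  // Rule 1: persistent windows never count for or against adherence.
  static List<Window> adherenceWindows(List<Window> windows) {
    return windows.stream().filter(w -> !w.persistent()).toList();
  }

  // Rule 2: only the most recent timestamp for each event drives the timeline.
  static Map<String, Instant> latestEventTimestamps(List<EventRecord> records) {
    Map<String, Instant> latest = new HashMap<>();
    for (EventRecord r : records) {
      latest.merge(r.eventId(), r.timestamp(), (a, b) -> a.isAfter(b) ? a : b);
    }
    return latest;
  }
}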

SessionsOnDate Report

This is a date-range list API that returns a set of SessionsOnDate objects.

Code Block
import java.time.LocalDate;
import java.util.List;

// One day of sessions for one participant.
public class SessionsOnDate {
  String userId;
  LocalDate date;
  // One SessionOnDate entry per session window overlapping this date.
  List<SessionOnDate> sessions;
}

This is a flat report that lists all sessions with a startDate/endDate that overlaps the date of the record (remember that a session record really exists for each window of each session in a schedule). In the database, this is a set of records that will be grouped by date. For this reason, the allowed date range is small: 1-7 days only. The database SessionOnDate table would look something like this:

Code Block
// Hypothetical sketch only: the actual columns are not spelled out here, so
// these fields are assumptions drawn from the description above.
public class SessionOnDate {
  String userId;       // participant the window belongs to
  String sessionGuid;  // identifies the session window within the schedule
  LocalDate startDate; // first day the window is available
  LocalDate endDate;   // last day the window is available
  String state;        // e.g. compliant / noncompliant / unknown
}
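A sketch of how those flat rows could be grouped into the report objects, assuming the hypothetical fields above:

Code Block
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Sketch of building the report from flat rows already fetched for one user.
// This assumes the hypothetical SessionOnDate fields sketched above.
public class SessionsOnDateGrouper {
  static List<SessionsOnDate> report(String userId, LocalDate start, LocalDate end,
      List<SessionOnDate> rows) {
    List<SessionsOnDate> report = new ArrayList<>();
    // The API limits this loop to a 1-7 day range.
    for (LocalDate date = start; !date.isAfter(end); date = date.plusDays(1)) {
      final LocalDate d = date;
      SessionsOnDate day = new SessionsOnDate();
      day.userId = userId;
      day.date = d;
      // A session window counts for this day if its startDate/endDate overlap it.
      day.sessions = rows.stream()
          .filter(r -> !r.startDate.isAfter(d) && !r.endDate.isBefore(d))
          .toList();
      report.add(day);
    }
    return report;
  }
}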
Here is a visual design for this feature; this is all I have on requirements at this point:

...