...
All sessions in the timeline are grouped in this view by the event that triggers them, and then by the number of days since that event. All potential events in the schedule are included in this report whether or not they exist for the user (we don’t currently have a way to say “count this part of the schedule if the event exists for the user, but don’t count it if it doesn’t”). The actual days since each event are then calculated to determine the state of each time window (the window state is a session-level measure, derived from the adherence records of the assessments in that session).
Basically we figure out what they should be doing, and then look at their adherence records to figure out if they’re doing it.
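As a rough sketch of that grouping and days-since calculation (field names such as `event_id`, `offset_days`, and the `events` map are hypothetical here, not the actual schema):

```python
from collections import defaultdict
from datetime import datetime, timezone

def days_since(event_timestamp: datetime, now: datetime) -> int:
    """Whole days elapsed since the event occurred."""
    return (now - event_timestamp).days

def group_sessions(sessions, events, now):
    """Group scheduled sessions by (triggering event, day offset from that event).

    Sessions whose triggering event has never occurred for this user are
    still included (per the report's behavior), with days_since set to None.
    """
    groups = defaultdict(list)
    for s in sessions:
        ts = events.get(s["event_id"])  # None if the event never occurred
        elapsed = days_since(ts, now) if ts is not None else None
        groups[(s["event_id"], s["offset_days"])].append({**s, "days_since": elapsed})
    return groups
```

The `(event, offset)` key mirrors the view's grouping: first by triggering event, then by days since that event.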
The states are:
| State | Description | Adherence |
|---|---|---|
| not_yet_available | Participant should not have seen or started this session; it is in the future. | N/A |
| unstarted | Participant should see the session (they are being asked to do it now), but they have not started it. | unknown |
| started | Participant has started the session. | unknown |
| completed | Participant completed the session before it expired. | compliant |
| abandoned | Participant started or finished at least one assessment in the session, but there was more work to do and the session expired before they finished it. | noncompliant |
| expired | Participant did not start the session and it is now expired. | noncompliant |
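The mapping from a time window plus the participant's adherence records to these states can be sketched as follows (a minimal illustration, assuming the window is defined by a start and an expiration time, and that `started`/`completed` flags summarize the session's assessment-level adherence records):

```python
from datetime import datetime

def window_state(now: datetime, start: datetime, expiration: datetime,
                 started: bool, completed: bool) -> str:
    """Derive the session-level window state.

    `completed` is assumed to be set only when the session was finished
    before it expired, matching the table above.
    """
    if now < start:
        return "not_yet_available"
    if completed:
        return "completed"                             # compliant
    if now < expiration:
        return "started" if started else "unstarted"   # adherence unknown
    # The window has expired without completion.
    return "abandoned" if started else "expired"       # noncompliant
```

This is just the decision table written as code; the real derivation also has to roll up the individual assessment records within the session into those two flags.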
...
TODO: This view operates on the most recent timestamp for each event. Study bursts generate multiple events and will work with this, but do we have cases where part of the timeline is supposed to repeat because an event is updated? For example, if the event is “visited the clinic” and the study expects participants to visit four times over the length of the study, we cannot capture this as an expectation. They would have to create four separate events; they couldn’t just update “clinic_visited” or some such four times.
Study-level paged APIs
These reports are not easily cacheable because the states depend on the current time of the request. A single report takes about a third of a second to generate, so paginated lists of users would be prohibitively expensive to calculate on demand. Let’s assume we have a worker process that creates some form of update (probably covering just the active days of the report) and caches it for the list view API. This means the list view will lag the current state by a day or so. Coordinators could go in and look at individual accounts for more up-to-date information.
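That worker/cache split might be sketched like this (the class and function names are hypothetical, and a real deployment would use a persistent cache rather than an in-memory dict):

```python
import time

class ReportCache:
    """In-memory stand-in for the cached list-view reports."""
    def __init__(self):
        self._reports = {}  # user_id -> (report, computed_at)

    def put(self, user_id, report):
        self._reports[user_id] = (report, time.time())

    def get(self, user_id):
        return self._reports.get(user_id)

def refresh_reports(cache, user_ids, compute_report):
    """Periodic worker pass: recompute each user's report (~1/3 s each)
    so the paged list API can read from the cache instead of deriving
    window states at request time. Cached data may be up to a day stale.
    """
    for uid in user_ids:
        cache.put(uid, compute_report(uid))
```

The per-user detail API would bypass the cache and compute live, which is what lets coordinators see up-to-date information for an individual account.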
...