...

Caching individual participant adherence reports

There are options:

  • Cache them with Redis (probably my least-preferred option since domain logic will be based on these reports);

  • Write them to a participant report either when requested, or on a nightly basis, and expose only reports through the APIs (treating reports as a caching mechanism);

  • Write the report back to the database (possibly more performant than calculating in memory, though that's unproven). If we want paginated views of individual performance status, then we'll need to define a schema for the base information and store it back to a table.

The reports only need to change when the user submits an event or an adherence record (or when the timeline changes, which would invalidate quite a lot). Typically we delete the cached item on these writes and let the next read re-create the cached item.
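The delete-on-write, rebuild-on-read pattern above could be sketched as follows. This is illustrative only: a plain dict stands in for Redis, and the function and key names are assumptions, not an existing API.

```python
# Sketch of cache invalidation for participant adherence reports:
# delete the cached item when the user writes an event or adherence
# record, and let the next read re-create it.

cache = {}  # stands in for Redis

def compute_adherence_report(user_id):
    # Placeholder for the real (potentially expensive) report calculation.
    return {"user_id": user_id, "adherence": 85}

def get_report(user_id):
    key = f"adherence:{user_id}"
    if key not in cache:
        cache[key] = compute_adherence_report(user_id)  # rebuild on read
    return cache[key]

def on_event_or_adherence_write(user_id):
    # Invalidate on write; the next get_report() recomputes.
    cache.pop(f"adherence:{user_id}", None)
```

With real Redis, the timeline-change case ("which would invalidate quite a lot") might instead delete a whole key prefix or rely on a short TTL.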

Protocol adherence

There are definitely a lot of possible ways we could measure adherence. However, it can be measured as a proportion of compliant to non-compliant scheduled assessment states. We could then set a threshold for this (i.e. warn administrators if there are any non-compliant assessments, vs. setting a proportion threshold that would need to be hit before notifying administrators). This can be calculated as part of the event day adherence report. These reports are not easily cacheable because the states depend entirely on the current time. However, they are at a day level of resolution, so it should be possible to cache them for less than a day. The cache can be invalidated when event timestamps or adherence records are updated.

There are options for where to store these. If we don’t need a historical record of compliance, we could use Redis. If we want a historical record of compliance, we could write these as participant reports, or we could write them back to the database.
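The proportion-and-threshold idea might look like this. State names and the threshold value are assumptions for illustration, not a defined schema.

```python
# Sketch: adherence as the proportion of compliant scheduled assessment
# states, with a configurable threshold for notifying administrators.

def adherence_percentage(states):
    """states: list of scheduled assessment states ('compliant' /
    'non_compliant'); returns an integer percentage."""
    if not states:
        return 100  # nothing scheduled yet counts as fully adherent
    compliant = sum(1 for s in states if s == "compliant")
    return round(100 * compliant / len(states))

def should_notify_admins(states, threshold=60):
    # threshold=100 implements "warn on any non-compliant assessment";
    # lower values implement the proportion-threshold policy.
    return adherence_percentage(states) < threshold
```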


Study-level summaries

On an initial timeline_retrieved event, and thereafter whenever the adherence records or events are updated, we could calculate and store the user's event day adherence record. I do wonder about putting this on a queue and having a worker process perform the calculation so it doesn't slow down the original request. The goal would be to expose a paginated API of all the active records (unstarted, started, and compliant) for all users.
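The queue idea could be sketched as below: the request path only enqueues a message, and a worker recomputes and stores the record. An in-process queue and dict stand in for a real message broker and report store; all names are illustrative.

```python
import queue
import threading

recalc_queue = queue.Queue()
stored_records = {}  # stands in for the participant report store

def on_timeline_retrieved_or_update(user_id):
    # Called from the request path; cheap, just enqueues a message.
    recalc_queue.put(user_id)

def worker():
    # Worker process drains the queue, recalculating and storing
    # each user's event day adherence record.
    while True:
        user_id = recalc_queue.get()
        if user_id is None:  # shutdown sentinel
            break
        stored_records[user_id] = {"user_id": user_id, "state": "compliant"}
        recalc_queue.task_done()
```

The same shape works with an external broker (e.g. SQS) if the worker runs in a separate process.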

Arm and study summaries

Not sure what the requirements are for these, but worker processes would periodically (or when prompted by a message) calculate and store these summaries. The logical place would be study reports (but again, a SQL table is an option).
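Whatever the storage, the aggregation itself is simple: roll per-participant adherence up by arm and across the study. A minimal sketch, with field names assumed for illustration:

```python
from collections import defaultdict
from statistics import mean

def summarize(participant_reports):
    """participant_reports: iterable of dicts like
    {"arm": "treatment", "adherence": 85} (illustrative schema)."""
    by_arm = defaultdict(list)
    for r in participant_reports:
        by_arm[r["arm"]].append(r["adherence"])
    # Mean adherence per arm, plus an overall study-level mean.
    arm_summaries = {arm: round(mean(vals)) for arm, vals in by_arm.items()}
    study_summary = round(mean(r["adherence"] for r in participant_reports))
    return {"arms": arm_summaries, "study": study_summary}
```

A worker would run this on a schedule (or on a message) and write the result to a study report or summary table.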

...