
Initially we referred to this as “compliance,” but because that term also has meaning in a governance context, we use “adherence” to describe the measurement of the extent to which a participant performs all of the assessments they are asked to perform as part of their participation in a study.

In the v2 science APIs, Bridge has three APIs involved in schedule adherence:

  1. The schedule itself, which is finite in length and described by a timeline;

  2. The set of adherence records for the participant which describe in some detail what the participant has done in that timeline;

  3. A set of event timestamps that orient the participant in the timeline and tell us where they should be in terms of performing the assessments of the study.
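How the three pieces fit together can be sketched as follows. This is a deliberately simplified, hypothetical model (the instance GUIDs, field names, and state names are illustrative, not the real Bridge timeline or adherence record schemas): event timestamps anchor the timeline in real time, and adherence records mark which scheduled session instances were actually performed.

```python
from datetime import date, timedelta

# Hypothetical, simplified stand-ins for the three pieces of information
# described above; the real Bridge models are considerably richer.
timeline = [
    # (session instance guid, triggering event, start day, days until expiration)
    ("sessionA-day0", "study_start", 0, 1),
    ("sessionA-day7", "study_start", 7, 1),
]
events = {"study_start": date(2023, 1, 1)}   # participant's event timestamps
adherence = {"sessionA-day0"}                # instance guids with finished records

def session_state(guid, event_id, start_day, expiration_days, today):
    """Classify one scheduled session instance for a participant."""
    if event_id not in events:
        return "not_applicable"          # event never happened, never triggers
    start = events[event_id] + timedelta(days=start_day)
    end = start + timedelta(days=expiration_days)
    if guid in adherence:
        return "completed"
    if today < start:
        return "not_yet_available"
    if today >= end:
        return "expired"                 # window passed without a record
    return "active"

today = date(2023, 1, 5)
report = {guid: session_state(guid, ev, sd, ed, today)
          for (guid, ev, sd, ed) in timeline}
```

Here the per-participant adherence report falls out of a single pass over the timeline, which is essentially the combination the server would need to perform.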

So far the Bridge server provides this information through separate APIs and does not track the participant’s local time. For the sake of efficiency, I think we want the Bridge server to combine this information and provide reports on the status of the account, which will be less resource intensive than serving out all the information to be processed by a client or worker process. The issues I see with this:

  1. For the first time the server will need to have knowledge of the participant’s time and time zone;

  2. The reports are (relatively) expensive to calculate, in bulk they are not likely to change frequently, and they will need to be the building block for arm and study summary reports; so they will be read often, written seldom, and will need to be aggressively cached.

Caching individual participant adherence reports

There are options:

  • Cache them with Redis (probably my least-preferred option since domain logic will be based on these reports);

  • Write them to a participant report either when requested, or on a nightly basis, and expose only reports through the APIs (treating reports as a caching mechanism);

  • Write the report back to the database (actually more performant than calculating in memory? Not proven). If we want paginated views of individual performance status, then we’ll need to define a schema for the base information and store it back to a table.

The reports only need to change when the user submits an event or an adherence record (or when the timeline changes, which would invalidate quite a lot). Typically we delete the cached item on these writes and let the next read re-create the cached item.
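The write-invalidate pattern described above can be sketched as follows (helper names are hypothetical; the point is that writes only delete the cached report, and the next read rebuilds it):

```python
# Cache-aside sketch of the invalidation rule described above: writes that
# affect adherence delete the cached report; the next read recreates it.
cache = {}
calls = []   # tracks how often the expensive calculation actually runs

def calculate_report(user_id):
    calls.append(user_id)            # stands in for the expensive DB work
    return {"userId": user_id, "adherence": "..."}

def get_report(user_id):
    if user_id not in cache:
        cache[user_id] = calculate_report(user_id)
    return cache[user_id]

def on_adherence_write(user_id):
    cache.pop(user_id, None)         # invalidate; do not recalculate eagerly

get_report("u1"); get_report("u1")   # second read served from the cache
on_adherence_write("u1")             # event or adherence record submitted
get_report("u1")                     # recalculated on the next read
```

The same pattern works whether the “cache” is Redis, a participant report, or a SQL table; a timeline change would simply invalidate every cached report for the study.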

Arm and study summaries

Not sure what the requirements are for these, but worker processes would periodically (or when prompted by a message) calculate and store these. The logical place would be study reports (but again a SQL table is an option).
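Whatever the requirements turn out to be, the shape of the worker job is likely a fold over the per-participant reports. A sketch with entirely made-up adherence bands and thresholds (the real categories would come from the study requirements):

```python
# Hypothetical worker-side aggregation: roll per-participant adherence
# percentages up into a study- or arm-level summary. The band names and
# thresholds here are illustrative assumptions, not a Bridge API.
def study_summary(adherence_pcts):
    bands = {"compliant": 0, "at_risk": 0, "out_of_adherence": 0}
    for pct in adherence_pcts:
        if pct >= 80:
            bands["compliant"] += 1
        elif pct >= 50:
            bands["at_risk"] += 1
        else:
            bands["out_of_adherence"] += 1
    return bands

study_summary([95, 60, 30, 100])
# {'compliant': 2, 'at_risk': 1, 'out_of_adherence': 1}
```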

Messaging APIs

One UI design I have seen shows a list of notifications in an administrative UI, indicating specific participants that are out-of-adherence with the study protocol. This hints at a very large amount of functionality.

We have had requirements to message both participants and study administrators over the years. For example, very early on we embedded a blog in an app for this purpose. Then we implemented support for remote notifications and topics (rarely used). We often message participants through local notifications based on scheduling information. Now we are talking about showing notifications to administrators about participants being out-of-adherence in an administrative UI.

I would like to add a simple but complete messaging system to Bridge which could be used to persist, deliver, and record the consumption of messages. Features would include:

  • API would be push-pull (write a message, request a list of active messages for a recipient; push-push is a lot more complicated);

  • recipients could include individuals or organizations (and other groups once we have them);

  • each recipient could mark the message as “read” to remove it from their own UI (without affecting other recipients), or “resolved” to remove it from everyone’s UIs. Messages would not be deleted, so they remain auditable;

  • messages could be set to expire;

  • messages could indicate whether they can be marked read or resolved, and whether they expire. For example, an adherence message might only allow being marked “resolved,” since you don’t want multiple study coordinators all following up with the same participant;

  • messages could be assigned (their recipients changed), and could indicate a status like “busy” along with the user currently working on the message. Now you have a very simple way to hand off work between team members.
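The read/resolved semantics above can be sketched as follows (a hypothetical model; the class and field names are illustrative, not an existing Bridge API):

```python
# Sketch of the read/resolved semantics described above: "read" hides a
# message for one recipient, "resolved" hides it for everyone, and the
# message itself is retained so it stays auditable.
class Message:
    def __init__(self, recipients, allow_read=True, allow_resolve=True):
        self.recipients = set(recipients)
        self.allow_read = allow_read        # can recipients mark it read?
        self.allow_resolve = allow_resolve  # can recipients resolve it?
        self.read_by = set()                # hides it for that recipient only
        self.resolved = False               # hides it for everyone

    def visible_to(self, user):
        return (user in self.recipients
                and not self.resolved
                and user not in self.read_by)

    def mark_read(self, user):
        if self.allow_read:
            self.read_by.add(user)

    def mark_resolved(self, user):
        if self.allow_resolve:
            self.resolved = True            # retained, never deleted

# An adherence message that can only be resolved, never individually "read":
msg = Message({"coordA", "coordB"}, allow_read=False)
msg.mark_read("coordA")                     # no effect; read is not allowed
msg.mark_resolved("coordA")
```

Disallowing “read” on adherence messages forces the coordinators to share one resolution, which is exactly the single-follow-up behavior described above.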

Once pull messaging is working, we could support “push” messaging (where we send a message via email or SMS, or possibly through remote or local push notifications). This introduces a lot of additional complexity, however:

  • It would need to be integrated with Amazon’s messaging APIs, where we would record “sent” status. The advantage is that we’d have a record of such messaging, which is part of the study protocol and something we have built one-off solutions for scientists for in the past;

  • We’d want to implement messaging preferences, per user/organization and possibly by message type.
