
Getting Started

Data Formats

Digital health data is exported by Bridge to Synapse Projects in two formats: raw and parquet.

Raw Data

Raw data is the data exactly as it was sent by the app to Bridge. Bridge exports this data to Synapse as Synapse File entities. The raw data can be found in the study’s Synapse project under the “Files” tab in a folder called “Bridge Raw Data”. Additionally, the raw data and its metadata can be browsed in a view named “Bridge Raw Data View” under the “Tables” tab. We don’t recommend working directly with raw data, although you may find the view convenient for working with the metadata.

Parquet Data

Parquet data is a normalized version of the raw data. It can be found in the study’s Synapse project under the “Files” tab in a folder named “parquet”. Don’t freak out if this folder is empty! This is by design. Underneath the folder name will be text similar to the following:

This tells us where the data really is. We use Synapse to control access to this data, although all the real action happens in S3.

Parquet Datasets

The Parquet data is stored in S3 as a Parquet dataset, a collection of Parquet files partitioned by a folder hierarchy. For an explanation of how this data is organized, please see Understanding Parquet Datasets.

Accessing the Data

We assume that you already have access to the Synapse project containing your study data. If you don’t, please talk with your PI or whoever configured the study in the Bridge Study Manager. The Bridge Downstream team cannot unilaterally grant you access to any digital health data.

To access the Parquet data, you must authenticate with AWS so that it can grant access to the data in S3. Synapse makes this authentication easy with STS tokens, which can be retrieved using the Python, R, Java, or command line Synapse clients. Instructions are provided below for Python and R.

1. Install clients

Because of the way we will be interfacing with the data, we don’t need to install an AWS client. Instead, we will use the Synapse client to authenticate with Synapse and retrieve an STS token. We will then use an Arrow client to load the parquet data directly from S3. To work with the parquet data as a data frame in R, we also need the dplyr dependency.

Python
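A typical installation from PyPI, assuming pip points at the Python environment you will be using:

```shell
pip install synapseclient pyarrow
```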

R
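A typical installation in R. Note that synapser is distributed through Sage Bionetworks’ R Archive Network (RAN) rather than CRAN, so we add it to the repository list:

```r
# synapser is hosted on Sage's RAN repository
install.packages("synapser",
                 repos = c("http://ran.synapse.org", "https://cloud.r-project.org"))
install.packages(c("arrow", "dplyr"))
```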

2. (optional) Configure Synapse credentials

There are multiple ways to cache Synapse credentials so that you don’t need to type your username and password each time you log in with the Synapse client. One of the simplest ways, which works for both the Python and R clients, is to edit the following section in the .synapseConfig file, which is written to the home directory (~) when installing the Synapse client. Don’t forget to uncomment the [authentication] header and the login parameters.

###########################
#    Login Credentials    #
###########################

## Used for logging in to Synapse
## Alternatively, you can use rememberMe=True in synapseclient.login
## or the login subcommand of the commandline client.
[authentication]
username = <username>
authtoken = <authtoken>

Complete documentation on how to configure the client can be found here.

3. Authenticate with AWS using an STS token

The code below defines a global variable PARQUET_FOLDER. This is the Synapse ID of the parquet folder described in the “Parquet Data” section above. Change this value to match the ID of the parquet folder in your own project.

Python

import synapseclient
from pyarrow import fs, parquet

PARQUET_FOLDER = "syn00000000"

# Pass your credentials or configure your .synapseConfig
syn = synapseclient.login()

# Get STS credentials
token = syn.get_sts_storage_token(
    entity=PARQUET_FOLDER,
    permission="read_only",
    output_format="json")

# Pass STS credentials to Arrow filesystem interface
s3 = fs.S3FileSystem(
    access_key=token['accessKeyId'],
    secret_key=token['secretAccessKey'],
    session_token=token['sessionToken'],
    region="us-east-1")

R

# Optional, we use :: operators to make the namespace explicit
library(synapser)
library(arrow)
library(dplyr)

PARQUET_FOLDER <- "syn00000000"

# Pass your credentials or configure your .synapseConfig
synapser::synLogin()

# Get STS credentials
token <- synapser::synGetStsStorageToken(
    entity = PARQUET_FOLDER,
    permission = "read_only",
    output_format = "json")

# Pass STS credentials to Arrow filesystem interface
s3 <- arrow::S3FileSystem$create(
    access_key = token$accessKeyId,
    secret_key = token$secretAccessKey,
    session_token = token$sessionToken,
    region = "us-east-1")

For those who are curious, full documentation on using STS with Synapse can be found here.

4. Read parquet dataset as a data frame

Now that we have an S3 file system interface, we can begin interfacing with the S3 bucket. An explanation of how the parquet datasets are organized within the S3 bucket can be found on the page Understanding Parquet Datasets. To view which parquet datasets are available to us:

List Parquet datasets

Python
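A sketch of what this might look like, assuming the s3 filesystem object from step 3 and a hypothetical BUCKET_PATH (substitute the S3 location shown under your project’s parquet folder, without the s3:// scheme):

```python
from pyarrow import fs

# Hypothetical bucket/prefix -- substitute the S3 location of your
# project's parquet folder (without the "s3://" scheme)
BUCKET_PATH = "my-bucket/my-project/parquet"

# `s3` is the S3FileSystem created in step 3
base_dir = s3.get_file_info(fs.FileSelector(BUCKET_PATH))
for file_info in base_dir:
    print(file_info.path)
```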

R
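The equivalent sketch in R, again assuming the s3 object from step 3 and a hypothetical BUCKET_PATH:

```r
# `s3` is the S3FileSystem created in step 3.
# Hypothetical bucket/prefix -- substitute your project's location.
BUCKET_PATH <- "my-bucket/my-project/parquet"

base_dir <- s3$GetFileInfo(arrow::FileSelector$create(BUCKET_PATH))
for (file_info in base_dir) {
  print(file_info$path)
}
```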

Example output:

Each of the above is a parquet dataset, with the exception of owner.txt. We can read a parquet dataset into memory directly from S3.

Serialize Parquet dataset as data frame

Python
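A sketch, again assuming the s3 object from step 3; dataset_metadata is a hypothetical dataset name (use one of the names listed in the previous step):

```python
from pyarrow import parquet

BUCKET_PATH = "my-bucket/my-project/parquet"  # hypothetical, as above

# `s3` is the S3FileSystem created in step 3;
# "dataset_metadata" is a hypothetical dataset name
dataset = parquet.ParquetDataset(
    f"{BUCKET_PATH}/dataset_metadata",
    filesystem=s3)
df = dataset.read_pandas().to_pandas()
```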

R
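The equivalent sketch in R, with the same hypothetical names:

```r
BUCKET_PATH <- "my-bucket/my-project/parquet"  # hypothetical, as above

# "dataset_metadata" is a hypothetical dataset name
df <- arrow::open_dataset(
    s3$path(paste0(BUCKET_PATH, "/dataset_metadata"))) %>%
  dplyr::collect()
```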

In the above example, we loaded the entire parquet dataset (all metadata across all assessments) into memory. In some cases this isn’t desirable. For example, we might only want to work with data from a specific assessment. To do so, we can take advantage of the partition field assessmentid and only read data from a specific assessment. Assuming there is an assessment called memory-for-sequences:

Filter and serialize Parquet dataset as data frame

Python
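A sketch using the filters argument of ParquetDataset, again assuming the s3 object from step 3 and the hypothetical names from above:

```python
from pyarrow import parquet

BUCKET_PATH = "my-bucket/my-project/parquet"  # hypothetical, as above

# Filter on the partition column so only files under
# assessmentid=memory-for-sequences are read from S3
dataset = parquet.ParquetDataset(
    f"{BUCKET_PATH}/dataset_metadata",  # hypothetical dataset name
    filesystem=s3,
    filters=[("assessmentid", "=", "memory-for-sequences")])
df = dataset.read_pandas().to_pandas()
```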

R
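The equivalent sketch in R, using a dplyr filter on the partition column before collecting:

```r
BUCKET_PATH <- "my-bucket/my-project/parquet"  # hypothetical, as above

# Arrow pushes the filter down, so only the
# assessmentid=memory-for-sequences partition is read
df <- arrow::open_dataset(
    s3$path(paste0(BUCKET_PATH, "/dataset_metadata"))) %>%
  dplyr::filter(assessmentid == "memory-for-sequences") %>%
  dplyr::collect()
```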

You can filter on non-partition columns as well, but this isn’t any faster than serializing the entire parquet dataset. The only advantage is that you won’t need to store the entire dataset in memory, which can be particularly useful for motion or other time-series data.

Conclusion

You now understand the basics of serializing assessment data from a parquet dataset. For more information about interpreting and working with the parquet datasets, see Understanding Parquet Datasets.