What is Parquet data?
Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk.
Why should I care?
Assessment data collected as part of the Mobile Toolbox study is sent by the app to Bridge as a .zip archive of JSON files. This format is difficult for analysts to work with, so we normalize the JSON data and write the normalized data as (usually multiple) Parquet datasets to an S3 bucket. Because of its columnar format, Parquet data is well suited to map-reduce style operations. Software such as Apache Arrow can read specific columns or partitions of Parquet data into memory directly from cloud storage like S3. For instructions on how to access this data in S3 and serialize it as a data frame in Python or R, see Getting Started.
Parquet Datasets
Parquet datasets consist of one or more Parquet files which share the same schema and are partitioned by a directory hierarchy.
Partitions
We use the following partitions in our Parquet datasets:
assessmentid is the Bridge assessment identifier of this data.
year, month, and day are derived from the uploadedOn date. The uploadedOn date is not always the same as the date the assessment was completed.
To speed up serializing a Parquet dataset as a data frame by reading only the partitions matching a query, see Getting Started.
How are Bridge Downstream Parquet datasets organized?
Our Parquet dataset names are derived from three pieces of information:
The JSON schema identifier
The Parquet dataset schema version
The JSON hierarchy identifier
1. The JSON schema identifier
Every JSON file included in the .zip archive sent by the app to Bridge conforms to a JSON Schema. Each JSON Schema has an $id property which acts as a unique identifier. We derive the JSON schema identifier from the base name, minus any extension, of the $id property. For example, if a piece of JSON data conforms to this JSON Schema, which has $id https://sage-bionetworks.github.io/mobile-client-json/schemas/v2/ArchiveMetadata.json, then the schema identifier is ArchiveMetadata.
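The derivation described above (base name of the $id URL, minus the extension) can be sketched in a few lines of Python. This is illustrative; the actual implementation in the pipeline may differ.

```python
# Sketch: derive a JSON schema identifier from a JSON Schema $id URL
# by taking the base name of the URL path and stripping the extension.
from pathlib import PurePosixPath
from urllib.parse import urlparse

def schema_identifier(schema_id: str) -> str:
    """Return the base name of a JSON Schema $id, minus its extension."""
    return PurePosixPath(urlparse(schema_id).path).stem

print(schema_identifier(
    "https://sage-bionetworks.github.io/mobile-client-json/"
    "schemas/v2/ArchiveMetadata.json"
))  # ArchiveMetadata
```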
Determining the JSON Schema of JSON data
As previously mentioned, assessment data is sent by the app to Bridge as a .zip archive of JSON data. This assessment data has an assessment identifier and an assessment version. These pieces of information, along with the file name, are mapped to a JSON Schema. For more information on the JSON Schemas used in assessments, see https://github.com/Sage-Bionetworks/mobile-client-json .
2. The Parquet dataset schema version
The Parquet dataset schema version differentiates Parquet dataset schemas within the same JSON schema identifier. In some cases, data which conforms to different JSON Schemas can be written to the same Parquet dataset under the same Parquet dataset schema. The important thing to understand about this component is that Parquet datasets with the same JSON schema identifier but different Parquet dataset schema versions have different Parquet dataset schemas. For example, the Parquet datasets dataset_ArchiveMetadata_v1 and dataset_ArchiveMetadata_v2 have different schemas.
3. The JSON hierarchy identifier
The JSON hierarchy identifier explicitly spells out where in the JSON data hierarchy this data comes from. For example, suppose you are looking for the startDate of each step in the assessment and the JSON data looks like this:
{
"stepHistory": [
{
"type": "edu.northwestern.mobiletoolbox.mfs.serialization.MfsStepResult",
"identifier": "MFS_welcome",
"position": 1,
"startDate": "2022-04-08T20:16:02.412-05:00",
"endDate": "2022-04-08T20:16:07.510-05:00"
}
]
}
Then the JSON hierarchy identifier of the Parquet dataset containing this data is stepHistory, since stepHistory is a list of objects where each object has a startDate property. If startDate had instead been an object, rather than a string, then the Parquet dataset containing this data would have the JSON hierarchy identifier stepHistory_startDate.
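The rule above (one identifier per nested object or list of objects, named by the underscore-joined path of keys) can be sketched as a small recursive walk. This is a simplification for illustration; the actual normalization logic may handle more cases.

```python
# Sketch: enumerate JSON hierarchy identifiers by walking a parsed JSON
# document. Each nested object, or list of objects, yields the
# underscore-joined path of keys leading to it. Illustrative only.
def hierarchy_identifiers(data, prefix=""):
    """Yield hierarchy identifiers for nested structures in `data`."""
    for key, value in data.items():
        path = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            yield path
            yield from hierarchy_identifiers(value, path)
        elif isinstance(value, list) and value and isinstance(value[0], dict):
            yield path
            for item in value:
                yield from hierarchy_identifiers(item, path)

doc = {
    "stepHistory": [
        {
            "identifier": "MFS_welcome",
            "startDate": "2022-04-08T20:16:02.412-05:00",
            "endDate": "2022-04-08T20:16:07.510-05:00",
        }
    ]
}
print(list(hierarchy_identifiers(doc)))  # ['stepHistory']
```

Had startDate been an object rather than a string, the walk would also yield stepHistory_startDate, matching the naming described above.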
JSON to Parquet datasets example
TODO