Table of Contents
...
- Downloads Count
- Page Views
- Data Breaches/Audit Trail
Downloads Count
Downloads count is the main statistic that users are currently looking for: it provides a way for project owners, funders, and data contributors to monitor interest over time in the datasets published in a particular project. This in turn reflects the interest in the project itself, and it is a metric of the value provided by the data in the project. This kind of data relates specifically to the usage of the platform by Synapse users, since downloads are not available without being authenticated. It is part of a generic category of statistics about the entities and metadata stored in the backend, and it is only a subset of the aggregate statistics that could be exposed (e.g. number of projects, users, teams, etc.).
Page Views
This metric is also an indicator of interest, but it plays a different role: it focuses on the general user activity over the Synapse platform as a whole. While it might be an indicator of a specific project's success, it captures a different aspect that might span the different types of clients used to interface with the Synapse API, and it includes information about users that are not authenticated in Synapse. For this particular aspect there are tools already integrated (e.g. Google Analytics) that collect analytics on user interactions. Note, however, that this information is not currently available to Synapse users, nor is it set up to produce information about specific project pages, files, wikis, etc.
Data Breaches/Audit Trail
Another aspect that came up, and might seem related, is identifying the when/what/why of potential data breaches (e.g. a dataset was released even though it was not supposed to be). This relates to the audit trail of user activity, used to identify potential offenders. While this information is crucial, it should not be exposed by the API, and a due process is in place for accessing this kind of data.
Project Statistics
With this brief introduction in mind this document focuses on the main driving use case, that is:
- A funder and/or project creator would like to have a way to understand if the project is successful and if its data is used.
There are several metrics that can be used in order to determine the usage and success of a project, among which:
- Project Access (e.g. page views)
- Number of Downloads
- Number of Uploads
- User Discussions
...
Files, Downloads and Uploads
Files in Synapse are referenced through an abstraction (FileHandle) that maintains the information about the link to the content of the file itself (e.g. an S3 bucket). A file handle is then referenced in many places (such as FileEntity and WikiPage, see FileHandleAssociateType) as a pointer to the actual file content. In order to actually download the content, the Synapse platform allows generating a pre-signed URL (according to the location where the file is stored) that can be used to directly download the file. Note that the platform has no way to guarantee that the pre-signed URL is actually used by the client to download the file. Every single pre-signed URL request in the codebase comes down to a single method, getURLForFileHandle.
...
Association Type | Average Daily Downloads | Max Daily Downloads | Average Daily Users | Max Daily Users |
---|---|---|---|---|
All | 39998 | 383539 | 140 | 227 |
FileEntity | 19963 | 367345 | 59 | 100 |
TableEntity | 19680 | 145050 | 12 | 21 |
Proposed API Design
The current API design uses a polymorphic approach to access certain kinds of objects, in particular Projects, Folders, and Files (see Entities, Files, and Folders Oh My!), exposing a single prefixed endpoint for CRUD operations (/entity). This has the effect that a GET request to the /entity/{id} endpoint for a specific entity might return a different response type (see the Entity Response Object). For a first implementation we need an endpoint that exposes statistics about a project.
In general, computing statistics might be an expensive operation; moreover, in the future we might want to extend the API to include several different types of statistics. For the current use case we can pre-compute the statistics aggregates monthly for each project, so that we can serve them relatively quickly with a single lookup. While we cannot guarantee that all kinds of statistics exposed in the future can be pre-computed and will return within a reasonable response time, to ease client integration we propose a dedicated endpoint that returns the statistics for a project within a single synchronous HTTP call.
...
Endpoint | Method | Description | Response Type | Restrictions |
---|---|---|---|---|
/statistics/project/{projectSynId} | GET | Allows getting the statistics for the given project | ProjectStatistics | |
...
Property | Type | Description |
---|---|---|
lastUpdatedOn | Date | Contains the last update date for the download/upload statistics; this value provides an approximation of the freshness of the statistics. This value might be null, in which case the download/upload statistics are not currently available. |
monthly | ARRAY&lt;StatisticsCountBucket&gt; | An array containing the monthly download/upload count for the last 12 months; each bucket aggregates a month's worth of data. The number of buckets is limited to 12. Each bucket includes the unique users count for the month. |
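To make the shape of the response concrete, a possible ProjectStatistics payload is sketched below. The lastUpdatedOn and monthly properties come from the table above; the bucket property names (month, count, usersCount) are illustrative assumptions, as the StatisticsCountBucket schema is not defined here.

```json
{
  "lastUpdatedOn": "2019-07-01T00:00:00.000Z",
  "monthly": [
    { "month": "2019-06", "count": 1523, "usersCount": 42 },
    { "month": "2019-05", "count": 987, "usersCount": 31 }
  ]
}
```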
...
Proposed Backend Architecture
In order to serve the statistics from the Synapse API we need a way to efficiently access them without heavily loading the web instances of the API.
To this end we propose an architecture that leverages various AWS services in order to collect relevant events from the Synapse API and service calls, store them for long-term analytics, and create aggregates that can be efficiently queried from the Synapse services.
In the following we provide a high level architecture of the components involved:
In particular the following key components are integrated into the system:
- AWS Kinesis Firehose: Allows collecting event records from the Synapse API, converting the records into a columnar format such as Apache Parquet, and storing the stream to an S3 destination bucket.
- AWS Glue: Used to build the catalog of tables used both by Kinesis Firehose for the record conversion and by Athena to efficiently query the data stored in S3.
- AWS Athena: Uses the Presto SQL engine and can be used to directly query the data produced by Kinesis Firehose; the data will be stored using the Apache Parquet format thanks to the Kinesis Firehose automatic conversion, which reduces both the storage and the query runtime.
Kinesis Firehose
The idea is to use Kinesis Firehose to send the events we are interested in (e.g. file upload and file download) as JSON records; the Kinesis stream will funnel the records to Firehose, where they will be converted to the columnar Apache Parquet format (the table schema is created and managed in AWS Glue) and stored to an S3 bucket.
For the first phase we collect statistics for downloads and uploads; for each type of event we will have a separate stream. An example of the JSON object sent to the Kinesis stream for a download event:
```json
{
  "timestamp": "1562626674712",
  "stack": "dev",
  "instance": 123,
  "projectId": 456,
  "userId": 5432,
  "associationType": "FileEntity",
  "associationId": 12312,
  "fileId": 6789
}
```
Athena
Once the data is in S3 it can be queried with AWS Athena using standard SQL. For the JSON schema example above we can run a SQL query grouping, for example, by projectId (and filtering by timestamp), counting the records as well as the distinct users (note: Athena uses Presto as its query engine, which supports approximate aggregations such as approx_distinct).
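As an illustration, the aggregation described above could look roughly like the following Athena query. The table name (file_downloads) and the month boundaries are assumptions; the columns match the JSON record sketched earlier, with the millisecond timestamp stored as a string:

```sql
-- Hypothetical table name; columns follow the download event record above.
SELECT projectId,
       count(*)                AS download_count,
       approx_distinct(userId) AS download_users_count
FROM file_downloads
WHERE from_unixtime(CAST("timestamp" AS BIGINT) / 1000)
      BETWEEN timestamp '2019-06-01 00:00:00' AND timestamp '2019-06-30 23:59:59'
GROUP BY projectId
```

Note that approx_distinct trades exactness for speed and memory; since the statistics are indicative, an approximate unique-user count should be acceptable.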
Athena can be accessed using the Amazon SDK directly from Java. This allows us to implement Synapse background workers that periodically query the data using Athena in order to compute and store manageable aggregates that can be queried directly from the Synapse services.
In particular, we will use two tables: statistics_status, which simply stores the statistic type (e.g. statistics_project_monthly_download and statistics_project_monthly_upload) and its last update time, and statistics_project_monthly, with the following columns:

```
statistics_project_monthly <project_id, year, month, download_count, download_users_count, upload_count, upload_users_count>
```
Note that when the aggregate query is run from a worker, only the projects for which at least one download or upload was recorded will be stored in the table; if no downloads or uploads were performed on a particular project (i.e. no record is found) the API can simply return a 0 count.
Note also that for the moment we can simply implement one worker per type of statistics aggregate, e.g. one that queries the downloads and one that queries the uploads, both storing the results in the same table.
...
Given that we will query the data stored in S3 month by month, we can partition the data so that Athena will scan only the needed records (see https://docs.aws.amazon.com/athena/latest/ug/partitions.html). We can initially create partitions day by day so that we do not restrict ourselves to always loading a month of data for each query we run with Athena; this will produce slightly slower queries for the worker, as multiple partitions will need to be loaded, but it will give us more flexibility.
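As a sketch of the day-by-day partitioning, the partition columns and S3 layout below are assumptions. Note that Kinesis Firehose's default S3 prefix (YYYY/MM/DD/HH) is not Hive-style, so each day's partition would need to be registered explicitly in the Glue catalog, e.g.:

```sql
-- Hypothetical table, partition columns, and bucket layout.
ALTER TABLE file_downloads ADD IF NOT EXISTS
  PARTITION (year = '2019', month = '07', day = '09')
  LOCATION 's3://statistics-bucket/file_downloads/2019/07/09/';
```

A worker querying a whole month would then restrict the scan with a predicate on the partition columns (e.g. WHERE year = '2019' AND month = '07'), so that Athena only loads that month's partitions.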
Statistics Tables
In a first phase we only provide monthly aggregates per project to the client; to this end we can store the monthly aggregates into RDS using dedicated statistics tables. If we need to store more fine-grained statistics (e.g. daily, or at the file level) we can later move to a more scalable solution (e.g. DynamoDB seems a good fit).
In order to compute the monthly aggregates we want to make sure that the workers only query the current month (unless the last successful run was before the current month). For each type of statistics we store the timestamp of the last successful run in the dedicated statistics_status table.
The workers will query this table to determine the months for which the aggregates need to be computed (note that each time a worker runs, the query will need to scan all the data for each considered month, as we need estimates of the number of unique users that downloaded/uploaded files on a monthly basis).
The monthly aggregates for each project are then stored into the dedicated statistics_project_monthly table.
For each project, year, and month, the table contains the count of downloads and uploads, the number of unique users that downloaded and uploaded files, as well as the last time the statistics for that particular month were updated (separately for download and upload). This table can be queried directly, read-only, by the service layer in the Synapse API.
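A possible RDS (MySQL) schema for this table is sketched below. The core columns come from the column list given earlier; the two *_last_update columns are an assumption derived from the requirement to track the per-month update time separately for downloads and uploads:

```sql
-- Sketch only; exact types and names would follow Synapse conventions.
CREATE TABLE statistics_project_monthly (
  project_id           BIGINT    NOT NULL,
  year                 SMALLINT  NOT NULL,
  month                TINYINT   NOT NULL,
  download_count       BIGINT    NOT NULL DEFAULT 0,
  download_users_count BIGINT    NOT NULL DEFAULT 0,
  download_last_update TIMESTAMP NULL,
  upload_count         BIGINT    NOT NULL DEFAULT 0,
  upload_users_count   BIGINT    NOT NULL DEFAULT 0,
  upload_last_update   TIMESTAMP NULL,
  PRIMARY KEY (project_id, year, month)
);
```

The composite primary key (project_id, year, month) makes the service-layer lookup for the last 12 months of a project a single indexed range read.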
Statistics Workers
We will initially need at least two different workers that will aggregate the statistics periodically using the Athena SDK:
- DownloadStatisticsAggregator: Will run every X hours (configurable, but 6 to 12 hours should probably be enough), run the SQL query on the S3 data for the download streams for the current month (or previous months, according to the statistics_status last_update value), and update the statistics_project_monthly table above.
- UploadStatisticsAggregator: Similar to the DownloadStatisticsAggregator, but performs the aggregation for uploads, updating the statistics_project_monthly table.
Both workers will need to batch the query results into smaller sets, potentially running multiple transactions to save the data.
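The batching itself is straightforward; a minimal sketch (the function name and batch size are illustrative, not part of the design):

```python
from itertools import islice

def batches(rows, batch_size):
    """Yield successive lists of at most batch_size rows from an iterable,
    so each list can be saved in its own transaction by the worker."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Example: 10 aggregate rows saved in transactions of up to 4 rows each.
print(list(batches(range(10), 4)))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```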
This initial setup will allow us to serve the statistics for a given project, but poses some potential issues:
- Zero downloads/uploads problem: We can avoid storing unnecessary data if for a given month a project had no uploads or downloads; at the application level we can simply return a 0 count when the record is not in the table for that month. This poses a problem, since we cannot discriminate between a zero count and a "not yet computed" case. We can work around this by using the global last_update: if for a given project we never see downloads or uploads, but we know that we last updated recently, we can assume that there were no downloads or uploads in the past.
- Last update time: The last update time for a given statistic (download and/or upload) is valid only if the project has had at least one download or upload in the last month. We can use the global last_update for the given statistic from the statistics_status table instead, but it is still an approximation: updating the statistics table might take some time and multiple transactions are potentially needed, so even though the statistics for a given project can be retrieved, the last_update might not reflect the fact that the worker is still in progress.
The issues above might not be considered critical, as statistics are anyway used as an indication and not taken strictly. We could potentially solve both problems implementing a stricter synchronization of projects and statistics with a set of additional workers:
- MonthlyStatisticsInitializer: This worker would run at the beginning of each month (or run periodically with a saved state) to pre-populate the statistics_project_monthly table for each project for the current month with a 0 count and no last_update. This would allow the previous workers to perform only updates on the project ids that they gathered from the aggregation.
- NewProjectListener: The previous worker alone is not enough, since new projects are created; the purpose of this worker would be to poll the appropriate SQS queue for new projects and initialize their record for the current month to 0 in the statistics_project_monthly table.
Additionally, we might want to store in the statistics_status table the initial date when we started collecting statistics; this would allow us to truncate the months before that date so that we can report an "unknown" status to the client.
Web Client Integration
The web client should have a way to show the statistics for a project or a specific file; some initial ideas:
...