This guide helps organizers create a space within Synapse to host a crowd-sourced Challenge. Challenges are community competitions designed to crowd-source new computational methods for fundamental questions in systems biology and translational medicine. Learn more about Challenges and see examples of past and current projects by visiting the Challenges and Benchmarking page.
A Challenge space provides participants with a Synapse project to learn about the Challenge, join the Challenge community, submit entries, track progress, and view results. This article will focus on:
Setting up the Challenge infrastructure
Launching the Challenge space
Setting up the Challenge wiki
Updating the Challenge
Interacting with the submitted entries
Before You Begin
Computing Power
At Sage Bionetworks, we generally provision an EC2 Linux instance for a challenge that leverages SynapseWorkflowOrchestrator to run CWL workflows. These workflows will be responsible for evaluating and scoring submissions (see model-to-data-challenge-workflow GitHub for an example workflow). If Sage Bionetworks is responsible for the cloud compute services, please give a general estimate of the computing power (memory, volume) needed. We can also help with the estimates if you are unsure.
By default, up to ten submissions can be evaluated concurrently, though this number can be increased or decreased accordingly within the orchestrator's .env file. Generally, the more submissions you want to run concurrently, the more power will be required of the instance.
Example
Let’s say a very large and/or complex submission file will require up to 10GB of memory to evaluate. If a maximum of four submissions should run at the same time, then an instance with at least 40GB of memory will be required (plus some extra memory for system processes), whereas ten concurrent submissions would require at least 100GB.
The volume of the instance will depend on variables such as the size of the input files and the generated output files. If running a model-to-data challenge, Docker images should also be taken into account. On average, participants create Docker images that are around 2-4 GB in size, though some have exceeded 10 GB. (When this happens, we do encourage participants to revisit their Dockerfile and source code to ensure they are following best practices, as >10 GB is on the high side.)
Using Sensitive Data
Synapse has the ability to apply access restrictions to sensitive data (e.g. human data), so that legal requirements are met before participants can access it. If human data are being used in the Challenge, or if you have any questions about the sensitivity of the Challenge data, contact the Synapse Access and Compliance Team (act@sagebase.org) for support with the necessary data access procedures.
If the data are sensitive and cannot be hosted on Synapse (e.g. they cannot leave the external site or data provider), provide a remote server with the following:
Support for Docker and, if possible, docker-compose. If Docker is not allowed, then support for Singularity and Java 8 is required.
Access to the SynapseWorkflowOrchestrator repository.
If Sage is not allowed access to the server, then it is the external site’s responsibility to get the orchestrator running in whatever environment is chosen. If Docker is not supported by the system, please let us know, as we do have workarounds (e.g. executing the orchestrator directly with Java).
Challenge Infrastructure Setup Steps
1. Create a workflow infrastructure GitHub repository for the Challenge. For the orchestrator to work, the repository must be public.
We have created two templates in Sage-Bionetworks-Challenges that you may use as a starting point. Their READMEs outline what will need to be updated within the scripts (under Configurations), but we will return to this later in Step 10.
a. data-to-model-challenge-workflow (submission type: prediction files)
b. model-to-data-challenge-workflow (submission type: Docker images)
2. Create the Challenge site on Synapse with the challengeutils Python package. The instructions to install and use this package are located in the Challenge Utilities Repository.
challengeutils create-challenge "challenge_name"
The create-challenge command will create two Synapse projects: one staging site and one live site.
Staging - Organizers use this project during challenge development to share files and draft the challenge wiki. The create-challenge command initializes the wiki with the DREAM Challenge Wiki Template.
Live - Organizers use this project as the pre-registration page during challenge development; it is replaced with a wiki once the challenge is ready to be launched. Organizers write the content of the wiki to provide detailed information about the challenge (e.g. challenge questions, data, participation, evaluation metrics). The wiki page must be made public to allow anyone to learn about the challenge and pre-register.
You may think of these two projects as development (staging project) and production (live project), in that all edits must be done in the staging site, NOT the live site. Maintaining both projects enables wiki content to be edited and previewed in the staging project before it is published to the live project. Changes are synced over to the live site with challengeutils' mirror-wiki (see Update the Challenge for more).
Note: At first, the live site will be just one page where a general overview about the Challenge is provided. There will also be a pre-register button that Synapse users can click on if they are interested in the upcoming Challenge.
For the initial deployment of the staging site to live, use synapseutils' copyWiki command, NOT mirror-wiki (more on this under Launch the Challenge).
The create-challenge command will also create four /wiki/spaces/DOCS/pages/1985446029 for the Challenge:
Participants - This Synapse team includes the individual participants and teams who register for the challenge.
Organizers - The challenge organizers must be added to this team so that they have permission to share files and edit the wiki on the staging project.
Administrators - The challenge administrators have administrator access to both the live and staging projects. Organizers do not need to be Administrators; ideally, administrators have a good understanding of Synapse.
Pre-registrants - This team is recommended while the challenge is under development. It allows interested participants to join a mailing list and receive notification of challenge launch news.
Add Synapse users to the Organizers and Admin teams as required.
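If you prefer to manage team membership programmatically, below is a minimal sketch using the Synapse Python client; the team name and username are placeholders, so substitute your Challenge's actual team and users.

import synapseclient

syn = synapseclient.login()

# Placeholder team name created by create-challenge; a team ID also works
organizers_team = syn.getTeam("My Challenge Organizers")

# Placeholder Synapse username of the organizer to invite
syn.invite_to_team(organizers_team, user="organizer_username")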
3. On the live site, go to the CHALLENGE tab and create as many /wiki/spaces/DOCS/pages/1985151345 as needed (for example, one per sub-challenge) by clicking on Challenge Tools > Create Evaluation Queue. By default, create-challenge will create an evaluation queue for writeups, which you will already see listed here.
A writeup is required of all participants in order for their entries to be considered for final evaluation and ranking. A writeup should include all contributing persons, a thorough description of the methods and of any data used outside of the Challenge data, as well as all scripts, code, and prediction file(s)/Docker image(s). We require all of these so that we can ensure a top performer's code and final output are reproducible.
Important: the 7-digit number in parentheses following each evaluation queue name is its evaluation ID. You will need these IDs later in Step 9, so make note of them.
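The evaluation IDs can also be looked up programmatically with the Synapse Python client; here is a small sketch, where the project ID is a placeholder for the live site's Synapse ID.

import synapseclient

syn = synapseclient.login()

# Placeholder Synapse ID of the live Challenge project
for evaluation in syn.getEvaluationByContentSource("syn123"):
    print(evaluation.id, evaluation.name)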
4. While still on the live site, go to the FILES tab and create a new folder called "Logs" by clicking on Files Tools > Add New Folder.
Important: This folder is where the participants' submission logs and prediction files are uploaded, so make note of its Synapse ID for use later in Step 9.
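If you would rather script this step, below is a minimal sketch with the Synapse Python client; the project ID is a placeholder for the live site's Synapse ID.

import synapseclient
from synapseclient import Folder

syn = synapseclient.login()

# Placeholder Synapse ID of the live Challenge project
logs_folder = syn.store(Folder("Logs", parent="syn123"))
print(logs_folder.id)  # note this Synapse ID for Step 9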
5. On the staging site, go to the FILES tab and create a new file by clicking on Files Tools > Upload or Link to a File > Link to URL.
For "URL", enter the web address to the zipped download of the workflow infrastructure repository. You may get this address by going to the repository and clicking on Code > right-clicking Download Zip > Copy Link Address:
Name the file (we generally use "workflow"), then click Save.
Important: This file will be what links the evaluation queue to the orchestrator, so make note of its Synapse ID for use later in Step 9.
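This step can also be done with the Synapse Python client by storing the URL as an external file link; a sketch follows, assuming placeholder values for the archive URL and the staging project ID.

import synapseclient
from synapseclient import File

syn = synapseclient.login()

# Placeholder URL of the zipped workflow repo and placeholder staging project ID
archive_url = "https://github.com/<org>/<repo>/archive/refs/heads/main.zip"
workflow_file = syn.store(
    File(path=archive_url, name="workflow", parent="syn123", synapseStore=False)
)
print(workflow_file.id)  # note this Synapse ID for Step 9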
6. Add an annotation to the file called ROOT_TEMPLATE by clicking on Files Tools > Annotations > Edit. The "Value" will be the path to the workflow script, written as:
{infrastructure workflow repo}-{branch}/path/to/workflow.cwl
For example, this is the path to workflow.cwl of the model-to-data template repo:
model-to-data-challenge-workflow-main/workflow.cwl
Important: The ROOT_TEMPLATE annotation is what the orchestrator uses to determine which file in the repository is the workflow script.
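The same annotation can be set with the Synapse Python client; here is a minimal sketch, assuming a placeholder Synapse ID for the workflow file from Step 5 and the model-to-data template path shown above.

import synapseclient

syn = synapseclient.login()

# Placeholder Synapse ID of the "workflow" file from Step 5
workflow_file = syn.get("syn123", downloadFile=False)

# Annotation the orchestrator uses to locate the workflow entry point
workflow_file.ROOT_TEMPLATE = "model-to-data-challenge-workflow-main/workflow.cwl"
syn.store(workflow_file, forceVersion=False)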
7. Create a cloud compute environment with the required memory and volume specifications. Once it spins up, log into the instance and clone the orchestrator:
git clone https://github.com/Sage-Bionetworks/SynapseWorkflowOrchestrator.git
Follow the "Setting up linux environment" instructions in the README to install and run Docker, as well as docker-compose
.
8. While still on the instance, change directories to SynapseWorkflowOrchestrator/ and create a copy of the .envTemplate file as .env (or simply rename it to .env):
cd SynapseWorkflowOrchestrator/
cp .envTemplate .env
9. Open .env and enter values for the following property variables:
Property | Description | Example
SYNAPSE_USERNAME | Synapse credentials under which the orchestrator will run. The provided user must have access to the evaluation queue(s) being serviced. | dream_user
SYNAPSE_PASSWORD | Password for SYNAPSE_USERNAME. This can be found under My Dashboard > Settings. | (your password)
WORKFLOW_OUTPUT_ROOT_ENTITY_ID | Synapse ID of the "Logs" folder. Use the Synapse ID from Step 4. | syn123
EVALUATION_TEMPLATES | JSON map of evaluation IDs to the workflow file, where the key is the evaluation ID and the value is the Synapse ID of the file created in Step 5 (which links to the workflow repo archive). Use the evaluation IDs from Step 3 as the key(s) and the Synapse ID from Step 5 as the value. | {"9614000": "syn456"}
Other properties may also be updated if desired, e.g. SUBMITTER_NOTIFICATION_MASK, SHARE_RESULTS_IMMEDIATELY, MAX_CONCURRENT_WORKFLOWS, etc. Refer to the "Running the Orchestrator with Docker containers" notes in the README for more details.
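Since EVALUATION_TEMPLATES is just a JSON object mapping evaluation IDs to the Synapse ID of the workflow file, a quick way to compose and sanity-check it (using placeholder IDs) is:

import json

# Placeholder evaluation queue IDs (Step 3) and workflow file Synapse ID (Step 5)
evaluation_ids = ["9614000", "9614001"]
workflow_file_id = "syn123"

evaluation_templates = {eval_id: workflow_file_id for eval_id in evaluation_ids}
print(json.dumps(evaluation_templates))
# prints: {"9614000": "syn123", "9614001": "syn123"}

Paste the printed value into .env as the value of EVALUATION_TEMPLATES.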
10. Return to the workflow infrastructure repository and clone it onto your local machine. Open the repo in your editor of choice and make the following edits to the scripts:
Data-to-model template:
Script | What to Edit | Required TODO?
workflow.cwl | Update the goldstandard Synapse ID so that it points to the Challenge's goldstandard file | Yes
 | Set the notification flag that controls whether participants also receive an email for valid submissions | No
 | Add metrics and scores to the annotations returned to participants after scoring | Yes
validate.cwl | Update the base image if the validation code is not Python | No
 | Remove the sample validation code and replace it with validation code for the Challenge | Yes
score.cwl | Update the base image if the scoring code is not Python | No
 | Remove the sample scoring code and replace it with scoring code for the Challenge | Yes
Model-to-data template:
Script | What to Edit | Required TODO?
workflow.cwl | Provide the admin user ID or admin team ID for the folder-permission settings (2 steps) | Yes
 | Update the goldstandard Synapse ID so that it points to the Challenge's goldstandard file | Yes
 | Set the notification flag that controls whether participants also receive an email for valid submissions (2 steps) | No
 | Provide the absolute path to the data directory that will be mounted into the participants' containers | Yes
 | Set whether submission log files should be returned to the participants | No
 | Add metrics and scores to the annotations returned to participants after scoring | Yes
validate.cwl | Update the base image if the validation code is not Python | No
 | Remove the sample validation code and replace it with validation code for the Challenge | Yes
score.cwl | Update the base image if the scoring code is not Python | No
 | Remove the sample scoring code and replace it with scoring code for the Challenge | Yes
Push the changes up to GitHub when done.
11. On the instance, change directories to SynapseWorkflowOrchestrator/ and kick-start the orchestrator with:
docker-compose up
To have it run in the background, add the -d flag (for detached mode):
docker-compose up -d
Note: it may be helpful to not run the orchestrator in detached mode at first, so that you will be made aware of any errors with the orchestrator setup right away.
If successful, the orchestrator will continuously monitor the evaluation queues specified by EVALUATION_TEMPLATES for submissions with the status RECEIVED. When it encounters a RECEIVED submission, it will run the workflow specified by ROOT_TEMPLATE and update the submission status from RECEIVED to EVALUATION_IN_PROGRESS. The orchestrator will also upload logs and prediction files to the folder specified by WORKFLOW_OUTPUT_ROOT_ENTITY_ID. The folder will be structured like this:
Logs
├── submitteridA
│   ├── submission01
│   │   └── submission01.zip
│   ├── submission02
│   │   └── submission02.zip
│   ...
│
├── submitteridA_LOCKED
│   ├── submission01
│   │   └── predictions.csv
│   ├── submission02
│   │   └── predictions.csv
│   ...
│
...
If an error is encountered during any of the workflow steps, the orchestrator will update the submission status to INVALID and the workflow will stop. If the workflow instead runs to completion, the orchestrator will update the submission status to ACCEPTED. Depending on how the workflow is set up (configured in Step 10), participants may periodically be notified by email of their submission's progress.
For a visual reference, a diagram of the orchestrator and its interactions with Synapse is provided below:
Display a Submissions Dashboard (Optional)
12. Go to the staging site and click on the TABLES tab. Create a new submission view by clicking on Table Tools > Add Submission View. Under "Scope", add the evaluation queue(s) you are interested in monitoring (you may add more than one), then click Next. On the next screen, select which information to display, then click Save. A table of the submissions and their metadata is now available for viewing and querying.
The displayed information can be changed by going to the submission view, then clicking on Submission View Tools > Show Submission View Schema > Edit Schema.
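The submission view can also be queried programmatically, for example to pull recent submissions into a DataFrame; here is a sketch, assuming a placeholder Synapse ID for the view.

import synapseclient

syn = synapseclient.login()

# Placeholder Synapse ID of the submission view created above
results = syn.tableQuery("SELECT * FROM syn123 ORDER BY createdOn DESC")
submissions = results.asDataFrame()  # requires pandas
print(submissions.head())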
Launch the Challenge (One-Time Only)
13. On the live site, go to the CHALLENGE tab and share the appropriate evaluation queues with the Participants team, giving them "Can submit" permissions.
14. Use the copyWiki command provided by synapseutils to copy over all pages from the staging site to the live site. When using copyWiki, it is important to also specify the destinationSubPageId parameter. This ID can be found in the URL of the live site, where it is the integer following .../wiki/, e.g. https://www.synapse.org/#!Synapse:syn123/wiki/123456
Example Script
import synapseclient
import synapseutils

syn = synapseclient.login()

synapseutils.copyWiki(
    syn,
    "syn1234",                   # Synapse ID of staging site
    destinationId="syn2345",     # Synapse ID of live site
    destinationSubPageId=999999  # ID following ../wiki/ of live site URL
)
Important: After the initial copying, all changes to the live site should now be synced over with mirror-wiki; DO NOT use copyWiki again. See more on updating the wikis under the Update the Challenge section below.
Stop the Orchestrator
15. On the instance, enter Ctrl + C (or Cmd + C on macOS), followed by:
docker-compose down
If you are running the orchestrator in the background, skip the first step and simply enter:
docker-compose down
Note: docker-compose must be run in the SynapseWorkflowOrchestrator/ directory. If you are not already in that directory, change directories first.
If the Challenge is currently active, but you need to stop the orchestrator for any reason, e.g. to make updates to the .env file, it may be helpful to first check whether any submissions are currently being evaluated. If you are running the orchestrator in the background, you can monitor its activity by entering:
docker ps
If only one container is listed, e.g. sagebionetworks/synapse-workflow-orchestrator (the orchestrator itself), then no submissions are currently running.
Otherwise, if you are not running the orchestrator in the background, read the logs on the terminal screen to determine whether there is current activity.
Wiki Setup
Use the following questions to help plan and set up the Challenge site and evaluation queues.
How many Challenge questions (“sub-challenges”) will there be?
Will the participants submit a single file/model to answer all sub-challenges or will they submit a different file/model per sub-challenge?
What is the general timeline for the Challenge?
Will there be challenge rounds? If so, how many?
Using rounds may help increase participation levels throughout the Challenge, as submission activity is usually high near the end of rounds/phases. If possible, it is best to have end dates fall mid-week; this will ensure that someone is on hand to help monitor and resolve issues should they arise.
Can users submit multiple submissions to a sub-challenge?
If so, should there be a limit in frequency? Examples: one submission per day, 3 submissions per week, 5 total, etc.
Setting a limit may help with potential overfitting, as well as prevent a user/team from monopolizing the compute resources.
What sort of submissions will the participants submit?
Common formats supported: prediction file (i.e. csv file), Docker image
When can the truth files (goldstandard) and training data (if any) be expected?
Will the data be released upon the challenge end? After the embargo? Never?
Is the data sensitive?
If so, will a clickwrap be needed? A clickwrap is an agreement between the participant and data provider that requires the participant to click a button agreeing to policies for data usage.
Should log files be returned?
Will there be a need to generate synthetic data?
Who will be responsible for providing/writing the validation and/or scoring scripts?
If Sage will be responsible, please provide as many details as possible regarding the format of a valid predictions file (for example, number of columns, names of column headers, valid values, etc.) and all exceptional cases. For scoring, please provide the primary and secondary metrics, as well as any special circumstances for evaluation, e.g. the CTD2 BeatAML primary metric is an average Spearman correlation, calculated from each drug’s Spearman correlation.
If Sage will not be responsible, please provide the scripts in either Python or R. If needed, we do provide sample scoring scripts that you may use as a template, available in both Python and R.
Are scores returned to the participants immediately or should they be withheld until the Challenge end?
A typical Challenge will immediately return the scores in an email upon evaluation completion; however, some past Challenges did not return scores until after the end date.
There is also a “hybrid” approach, in which scores are immediately returned during the Leaderboard/Testing Phase but withheld during the Final/Validation Phase (so that participants do not know their final performance until after the Challenge ends).
When should the evaluation results/leaderboard be accessible to the participants?
Some past Challenges had active leaderboards (i.e. participants could readily view their ranking throughout the evaluation round) whereas other Challenges did not release the leaderboards until the round/Challenge was over.
Regarding writeups: when will these be accepted?
Should participants submit their writeups during submission evaluations or after the Challenge has closed?
A writeup is a summary of all contributors, methods, scripts, code, and prediction files/Docker images that a team used for their submission. Writeups are required so that top-performing entries can be reproduced.
Update the Challenge
Challenge Site and Wikis
Any changes to the Challenge site and its wiki/sub-wiki contents must be done in the staging site, not the live site. To update the site:
Make whatever changes needed to the staging Synapse project.
Use challengeutils' mirror-wiki to push the changes to the live project.
Using the --dryrun flag prior to officially mirroring can be helpful in ensuring that the pages to be updated are actually the ones intended. For example, if an update was only made on the main wiki page of a particular Challenge, then only the first page should be reported as updated.
Evaluation Queue Quotas
Updating an evaluation queue’s quota can be done in one of two ways:
On the web via Edit Evaluation Queue in Synapse.
In the terminal via challengeutils' set-evaluation-quota (Coming Soon)
Web
To update the quota(s) on the web, go to the Synapse live site, then head to the CHALLENGE tab. Edit the evaluation queues as needed.
For example:
A challenge has 4 rounds, and each round has a limit of 3 submissions per submitter/team. To implement this quota, click Edit on the desired queue, then click Add Round and input the Duration and Submission Limits.
A challenge lasts for 4 months with a daily submission limit of 3. To implement this quota, click Edit on the desired queue, then click Add Round, input the Duration, and click Advanced Limits to pick a daily/weekly/monthly limit.
Terminal
Coming Soon
Workflow Steps
For any changes to the infrastructure workflow steps and/or scripts involved with the workflow (e.g. run_docker.py), simply make the edits to the scripts, then push the changes.
Note: dry-runs should always follow a change to the workflow; this will ensure things are still working as expected.
Interacting with Submissions
Throughout the challenge, participants will continuously submit to the evaluation queues. To manage these submissions, organizers can automate validation and scoring with the Synapse Python client evaluation commands.
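For example, here is a minimal sketch of listing the submissions in an evaluation queue with the Python client; the evaluation ID is a placeholder.

import synapseclient

syn = synapseclient.login()

# Placeholder evaluation queue ID from Step 3
evaluation = syn.getEvaluation("9614000")

# Iterate over submissions and their current statuses
for submission, status in syn.getSubmissionBundles(evaluation):
    print(submission.id, submission.name, status.status)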
Revealing Submissions and Scores
Organizers can create leaderboards when scores are ready to be revealed to participants using /wiki/spaces/DOCS/pages/2011070739.
Submission views are sorted, paginated, tabular forms that can display submission data and annotations (e.g. scores from the scoring application and other metadata) and that update as annotations or scores change. A submission view can provide real-time insight into the progress of a challenge.
Learn more about adding leaderboards in the /wiki/spaces/DOCS/pages/1985151345.
Related Articles
/wiki/spaces/DOCS/pages/1985151441, /wiki/spaces/DOCS/pages/1985151345, /wiki/spaces/DOCS/pages/2011070739