...

  • Setting up the Challenge infrastructure

  • Launching the Challenge space

  • Setting up the Challenge wiki

  • Updating the Challenge

  • Interacting with the submitted entries

Before You Begin

Computing Power

At Sage Bionetworks, we generally provision an EC2 Linux instance for a challenge and leverage SynapseWorkflowOrchestrator to run CWL workflows. These workflows are responsible for evaluating and scoring submissions (see the model-to-data-challenge-workflow GitHub repository for an example workflow). If Sage Bionetworks is responsible for the cloud compute services, please give a general estimate of the computing power (memory, volume) needed. We can also help with the estimates if you are unsure.

...

The volume of the instance will depend on variables such as the size of the input files and the generated output files. If running a model-to-data challenge, Docker images should also be taken into account. On average, participants create Docker images that are around 2-4 GB in size, though some have exceeded 10 GB. (When this happens, we encourage participants to revisit their Dockerfile and source code to ensure they are following best practices, as anything over 10 GB is unusually large.)

Using Sensitive Data

Synapse has the ability to apply access restrictions to sensitive data (e.g. human data), so that legal requirements are met before participants can access it. If human data are being used in the Challenge, or if you have any questions about the sensitivity of the Challenge data, contact the Synapse Access and Compliance Team (act@sagebase.org) for support with the necessary data access procedures.

...

If Sage is not allowed access to the server, then it is the external site’s responsibility to get the orchestrator running in whatever environment is chosen. If Docker is not supported on the system, please let us know, as we do have workarounds (e.g. using Java to execute the orchestrator instead).

Challenge Infrastructure Setup Steps

1. Create a workflow infrastructure GitHub repository for the Challenge. For the orchestrator to work, the repository must be public.

...

You may think of these two projects as development (staging project) and production (live project), in that all edits must be made in the staging site, NOT the live site. Maintaining both projects allows wiki content to be edited and previewed in the staging project before it is published to the live project. Changes are then synced over to the live site with challengeutils' mirror-wiki (see Update the Challenge for more).

Info

Note: At first, the live site will consist of a single page that provides a general overview of the Challenge. There will also be a pre-register button that Synapse users can click if they are interested in the upcoming Challenge.

...

For the initial deployment of the staging site to live, use synapseutils' copyWiki command, NOT mirror-wiki (more on this under Launch the Challenge).
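
For reference, below is a minimal sketch of that one-time copy using the Synapse Python client; the staging and live project IDs (syn1111111, syn2222222) are placeholders and should be replaced with your Challenge's IDs.

import synapseclient
import synapseutils

syn = synapseclient.login()

# One-time copy of the full wiki hierarchy from staging to live.
# "syn1111111" (staging) and "syn2222222" (live) are placeholder project IDs.
synapseutils.copyWiki(
    syn,
    entity="syn1111111",         # staging project (source)
    destinationId="syn2222222",  # live project (destination)
    updateLinks=True,            # rewrite internal links to point at the copied pages
)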

The create-challenge command will also create four /wiki/spaces/DOCS/pages/1985446029 for the Challenge:

...

For a visual reference, a diagram of the orchestrator and its interactions with Synapse is provided below:

...

Display a Submissions Dashboard (Optional)

12. Go to the staging site and click on the TABLES tab. Create a new submission view by clicking Table Tools > Add Submission View. Under "Scope", add the evaluation queue(s) you are interested in monitoring (you may add more than one), then click Next. On the next screen, select which information to display, then click Save. A table of the submissions and their metadata is now available for viewing and querying.

The displayed information can be changed by going to the submission view, then clicking Submission View Tools > Show Submission View Schema > Edit Schema.
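
The submission view can also be queried programmatically with the Synapse Python client, as in the minimal sketch below; "syn3333333" is a placeholder ID for the submission view created above.

import synapseclient

syn = synapseclient.login()

# "syn3333333" is a placeholder Synapse ID for the submission view.
results = syn.tableQuery("SELECT * FROM syn3333333")
submissions_df = results.asDataFrame()   # requires pandas
print(submissions_df.head())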

Launch the Challenge (One-Time Only)

13. On the live site, go to the CHALLENGE tab and share the appropriate evaluation queues with the Participants team, giving them "Can submit" permissions.
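
If you prefer to script this step, the sketch below uses the Synapse Python client to grant the team submit access; the evaluation queue ID (9614543) and Participants team ID (3429759) are placeholders, and the access types shown are an approximation of the web UI's "Can submit" setting.

import synapseclient

syn = synapseclient.login()

# Placeholders: evaluation queue ID and Participants team ID.
evaluation = syn.getEvaluation("9614543")
syn.setPermissions(
    evaluation,
    principalId=3429759,             # Participants team
    accessType=["READ", "SUBMIT"],   # roughly equivalent to "Can submit"
)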

...

Note

Important: After the initial copy, all subsequent changes should be synced over to the live site with mirror-wiki; DO NOT use copyWiki again. See more on updating the wikis under the Update the Challenge section below.

Stop the Orchestrator

15. On the instance, enter: 
Ctrl + C

...

Otherwise, if you are not running the orchestrator in the background, read the logs on the terminal screen to determine whether there is current activity.

Wiki Setup

Use the following questions to help plan and set up the Challenge site and evaluation queues.

...

A writeup is a summary of all contributors, methods, scripts, code, and prediction files/Docker images that a team used for their submission. Writeups are required so that top-performing entries can be reproduced.

Update the Challenge

Challenge Site and Wikis

Any changes to the Challenge site and its wiki/sub-wiki contents must be done in the staging site, not the live site. To update the site:

...

Using the --dryrun flag before officially mirroring can help ensure that the pages to be updated are actually the ones intended. For example, if an update is made only on the main wiki page of this particular Challenge, then only the first page is expected to be updated:

...

Evaluation Queue Quotas

Updating an evaluation queue’s quota can be done in one of two ways:

...

  • A challenge has 4 rounds, and each round has a limit of 3 submissions per submitter/team. To implement this quota, click Edit on the desired queue, then click Add Round and enter the Duration and Submission Limits.

  • A challenge lasts 4 months with a daily submission limit of 3. To implement this quota, click Edit on the desired queue, click Add Round and enter the Duration, then click Advanced Limits to choose daily/weekly/monthly limits.

Terminal

Coming Soon

Workflow Steps

For any changes to the infrastructure workflow steps and/or the scripts used by the workflow (e.g. run_docker.py), simply edit the scripts, then push the changes.

Info

Note: dry-runs should always follow a change to the workflow; this will ensure things are still working as expected.

Interacting with Submissions

Throughout the challenge, participants will continuously submit to the evaluation queues. To manage this ongoing flow of submissions, organizers can automate validation and scoring with the Synapse Python client's evaluation commands.
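
As a rough illustration (not the official workflow scripts), the sketch below loops over newly received submissions with the Python client and records a status for each; the evaluation queue ID is a placeholder and the validation logic is challenge-specific.

import synapseclient

syn = synapseclient.login()

# "9614543" is a placeholder evaluation queue ID.
evaluation = syn.getEvaluation("9614543")

# Walk through newly received submissions and update their statuses.
for submission, status in syn.getSubmissionBundles(evaluation, status="RECEIVED"):
    # ...challenge-specific validation/scoring would go here...
    status.status = "VALIDATED"   # or "INVALID" if the checks fail
    syn.store(status)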

Revealing Submissions and Scores

When scores are ready to be revealed to participants, organizers can create leaderboards using /wiki/spaces/DOCS/pages/2011070739.
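
One way to stage such a leaderboard is to add a wiki subpage to the staging project with the Python client, as sketched below; the project ID, page title, and markdown content are placeholders, and the actual leaderboard rendering (e.g. a widget or query over the scored submission view) should follow the linked documentation.

import synapseclient
from synapseclient import Wiki

syn = synapseclient.login()

project = syn.get("syn1111111")     # placeholder: staging project
root_wiki = syn.getWiki(project)

# Placeholder markdown; in practice the page would embed a query or widget
# over the scored submission view (see the linked documentation page).
leaderboard = Wiki(
    owner=project,
    title="Leaderboard",
    markdown="Scores are drawn from the submission view syn3333333.",
    parentWikiId=root_wiki.id,
)
leaderboard = syn.store(leaderboard)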

...

Learn more about adding leaderboards in the /wiki/spaces/DOCS/pages/1985151345.


...