Infrastructure Setup
At Sage, we generally provision an EC2 Linux instance for a challenge that leverages the SynapseWorkflowOrchestrator to run CWL workflows. These workflows are responsible for evaluating and scoring submissions (see the model-to-data-challenge-workflow repository on GitHub for an example workflow). If Sage is responsible for the cloud compute services, please give a general estimate of the computing power (memory, volume) needed. We can also help with the estimates if you are unsure.
What Can Affect the Computing Power
By default, up to ten submissions can be evaluated concurrently, though this number can be increased or decreased accordingly within the orchestrator's .env file. Generally, the more submissions you want to run concurrently, the more power will be required of the instance.
Example
Let’s say a submission file that is very large and/or complex requires up to 10 GB of memory for evaluation. If a maximum of four submissions should run at the same time, then an instance with at least 40 GB of memory will be required (plus some extra memory for system processes), whereas ten concurrent submissions would require at least 100 GB.
The volume of the instance will depend on variables such as the size of the input files and the generated output files. If running a model-to-data challenge, Docker images should also be taken into account. On average, participants create Docker images that are around 2-4 GB in size, though some have exceeded 10 GB. (When this happens, we encourage participants to revisit their Dockerfile and source code to ensure they are following best practices, as >10 GB is on the high side.)
Sensitive Data
If data is sensitive and cannot leave the external site or data provider, please provide a remote server with (ideally) the following:
Support for Docker and, if possible, docker-compose
If Docker is not allowed, then support for Singularity and Java 8 is a must (see the SynapseWorkflowOrchestrator repository for details)
If Sage is not allowed access to the server, then it is the external site’s responsibility to get the orchestrator running in whatever environment is chosen. If Docker is not supported by the system, please let us know, as we do have workarounds (e.g. using Java to execute the orchestrator).
One of the features of the Synapse.org platform is the ability to host a crowd-sourced challenge. Challenges are a great way to crowd-source new computational methods for fundamental questions in systems biology and translational medicine.
Learn more about challenges and see examples of past/current projects by visiting Challenges and Benchmarking.
Running a challenge on Synapse will require creating a challenge space for participants to learn about the challenge, join the challenge community, submit entries, and view results. This article is aimed at challenge organizers, and will focus on:
Setting up the infrastructure for submission evaluation
Launching the challenge
Updating the challenge
Closing the challenge
Monitoring the submissions
Before You Begin
🖥️ Required Compute Power
At Sage Bionetworks, we generally provision an AWS EC2 Linux instance to run the infrastructure of a challenge, leveraging the SynapseWorkflowOrchestrator to run CWL workflows. These workflows are responsible for evaluating and scoring submissions.
To budget for cloud compute services, consider the computing power needed to validate and score the submissions (for example: memory, storage, GPU, network speed). If contracting with Sage Bionetworks, we can help estimate these costs as part of the challenge budget.
📋 Using Sensitive Data as Challenge Data
Challenge data can be hosted on Synapse. If the data is sensitive (for example, human data), Synapse can apply access restrictions so that legal requirements are met before participants can access them. Contact the Synapse Access and Compliance Team (act@sagebase.org) for support with the necessary data access procedures for sensitive data.
🛑 Restricted Data Access
If data cannot leave the external site or data provider, it will be the data contributor’s responsibility to set up the challenge infrastructure. Contact the Challenges and Benchmarking team (cnb@sagebase.org) for consultations if needed.
To set up the infrastructure, you may follow Sage’s approach of using the SynapseWorkflowOrchestrator. The following will be required to use the orchestrator:
Support for Docker
(ideally) Support for docker-compose
If Docker is not allowed, then support for Singularity and Java 8 will be a must
Note that the steps outlined in this article will assume the orchestrator will be used.
Challenge Infrastructure Setup
Requirements
One Sage account
Python 3.8+
(for local testing) CWL runner of choice, e.g. cwltool
Access to cloud compute services, e.g. AWS, GCP, etc.
Outcome
This infrastructure setup will continuously monitor the challenge’s evaluation queue(s) for new submissions. Once a submission is received, it will undergo evaluation, including validation and scoring. All submissions will be downloadable by the challenge organizers, including the Docker image (for model-to-data challenges) and/or prediction files. Participants may periodically receive email notifications about their submissions (such as status and scores), depending on the infrastructure configuration.
Steps
1. Create a GitHub repository for the challenge workflow infrastructure. For the orchestrator to work, this repo must be public.
Two templates are available in Sage-Bionetworks-Challenges that you may use as a starting point. Their READMEs outline what will need to be updated within the scripts (under Configurations), but we will return to this later in Step 12.
Workflow Template | Submission Type
---|---
Data-to-model challenge workflow | Flat files, like CSV files
Model-to-data challenge workflow | Docker images
2. Create the challenge space on Synapse with challengeutils' create-challenge:

```
challengeutils create-challenge "challenge_name"
```
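challengeutils is a Python package distributed on PyPI. A minimal installation sketch (assumes Python 3.8+ and Synapse credentials already configured, for example in ~/.synapseConfig):

```
# Install the challenge utilities CLI
pip install challengeutils

# Confirm the installation and list the available subcommands
challengeutils --help
```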
This command will create two Synapse projects: a staging project and a live project. You may think of them as development and production, in that all edits must be done in the staging project, NOT the live one. Changes are instead synced over to the live project with challengeutils' mirror-wiki (more on this under Update the Challenge). For the initial deployment of the staging content to the live project, use synapseutils' copyWiki command, NOT mirror-wiki (more on this under Launch the Challenge).
Note: at first, the live site will be just one page providing a general overview of the challenge, along with a pre-register button that Synapse users can click on if they are interested in the upcoming challenge.
The two projects serve different purposes:
Staging - Organizers will use this project during challenge planning and development to share files and draft the wiki content. create-challenge will initialize its wiki with the DREAM Challenge Wiki Template.
Live - Organizers will use this project as the pre-registration page during challenge development. When the challenge is ready for launch, the project will then be replaced with the contents from staging.
We encourage you to use the staging project to make all edits and preview them before officially pushing the updates over to the live project.
Note: see Update the Challenge below to learn more about syncing changes from staging to live.
create-challenge will also create four Synapse teams for the challenge:
Pre-registrants - This team is used when the challenge is under development. It allows interested Synapse users to join a mailing list to receive notification of challenge launch news.
Participants - Once the challenge is launched, Synapse users will join this team in order to download the challenge data and make submissions.
Organizers - Synapse users added to this team will have the ability to share files and edit wikis on the staging project. Add users as needed.
Admin - Synapse users added to this team will have administrator access to both the live and staging projects. Organizers do not need to be administrators. Ideally, all admins should have a good understanding of Synapse. Add users as needed.
3. On the live project, go to the Challenge tab and create as many evaluation queues as needed (for example, one per question/task) by clicking on Challenge Tools > Create Evaluation Queue. Note that create-challenge already creates one evaluation queue by default.
Note: the 7-digit number in parentheses following each evaluation queue name is the evaluation ID. You will need these IDs later in Step 11.
4. While still on the live project, go to the Files tab and create a new folder called “Logs” by clicking on the add-folder icon (Files Tools > Add New Folder).
Note: this folder will contain the participants' submission logs and prediction files (if any). Make note of its synID for use later in Step 11.
5. On the staging project, go to the Files tab and click on the upload icon to Upload or Link to a File.

6. In the pop-up window, switch tabs to Link to URL. For “URL”, enter the web address to the zipped download of the workflow infrastructure repo. You may get this address by going to the repo and clicking on Code > right-clicking Download ZIP > Copy Link Address.

Name the file whatever you like (we generally use "workflow"), then click Save.
Note: this file is what links the evaluation queue to the orchestrator. Make note of its synID for use later in Step 11.
7. Add an annotation to the file called ROOT_TEMPLATE by clicking on the annotations icon, followed by Edit. This annotation will be used by the orchestrator to determine which file in the repo is the workflow script.

8. For “Value”, enter the filepath to the workflow script as if you had downloaded the repo as a ZIP, written as:

{infrastructure workflow repo}-{branch}/path/to/workflow.cwl

For example, model-to-data-challenge-workflow would be downloaded and unzipped as model-to-data-challenge-workflow-main, and the path to the workflow script within it is workflow.cwl. In this example, “Value” will be:

model-to-data-challenge-workflow-main/workflow.cwl
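If you prefer to set the annotation programmatically instead of through the web UI, here is a minimal sketch using the synapseclient Python package (the synID is a placeholder for the "workflow" file from Step 5; assumes synapseclient 2.x):

```python
import synapseclient

syn = synapseclient.login()

# Fetch the current annotations of the linked "workflow" file (placeholder synID)
annotations = syn.get_annotations("syn123")

# Point ROOT_TEMPLATE at the workflow script inside the unzipped repo archive
annotations["ROOT_TEMPLATE"] = "model-to-data-challenge-workflow-main/workflow.cwl"
syn.set_annotations(annotations)
```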
9. Create a cloud compute environment with the required memory and volume specifications. Once it spins up, log into the instance and clone the orchestrator:

```
git clone https://github.com/Sage-Bionetworks/SynapseWorkflowOrchestrator.git
```
Next, install Docker and docker-compose on the instance:
If you are a Sage employee: follow our internal instructions (/wiki/spaces/CHAL/pages/2806087732).
If you are not a Sage employee: follow the instructions listed under "Setting up linux environment" to install and run Docker as well as docker-compose on the compute environment of choice.
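Before moving on, it can help to confirm that the environment is ready. A quick check using standard Docker commands (hello-world is Docker's own test image):

```
# Confirm Docker and docker-compose are installed and on the PATH
docker --version
docker-compose --version

# Confirm the Docker daemon is running and able to pull and run images
docker run --rm hello-world
```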
10. On the instance, clone the SynapseWorkflowOrchestrator repo if it is not already available on the machine. Change directories to SynapseWorkflowOrchestrator/ and create a copy of the .envTemplate file as .env (or simply rename it to .env):

```
cd SynapseWorkflowOrchestrator/
cp .envTemplate .env
```
11. Open .env and enter values for the following config variables (the exact variable names are listed in .envTemplate):

Setting | Description
---|---
Synapse credentials | The Synapse user under which the orchestrator will run. The provided user must have access to the evaluation queue(s) being serviced.
Synapse password | Password for the user above. This can be found under My Dashboard > Settings.
Logs folder | synID of the "Logs" folder. Use the synID from Step 4.
Evaluation-to-workflow map | JSON map of evaluation IDs to the workflow repo archive, where the key is the evaluation ID and the value is the link to the archive. Use the evaluation IDs from Step 3 as the key(s) and the synIDs from Step 5 as the value(s), e.g. {"9810679": "syn456"}.
Note: refer to the "Running the Orchestrator with Docker containers" section of the orchestrator's README for additional configuration options.
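For orientation, below is a sketch of a filled-in .env. The variable names are assumptions based on the orchestrator's .envTemplate and may differ between versions, so keep whatever names .envTemplate actually defines; the account name and synIDs are placeholders:

```
# Synapse account the orchestrator will run as (must have access to the evaluation queues)
SYNAPSE_USERNAME=challenge-service-account
SYNAPSE_PASSWORD=********

# synID of the "Logs" folder created in Step 4
WORKFLOW_OUTPUT_ROOT_ENTITY_ID=syn123

# JSON map of evaluation ID -> synID of the linked workflow archive from Step 5
EVALUATION_TEMPLATES={"9810679": "syn456"}
```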
12. Clone the workflow infrastructure repo onto your local machine. Using a text editor or IDE, make the following updates to the scripts listed below:
Script | Update | Required?
---|---|---
workflow.cwl | Update synapseid to the Synapse ID of the challenge's goldstandard file | TRUE
workflow.cwl | Set errors_only to false if an email notification about a valid submission should also be sent (2 steps: email_docker_validation, email_validation) | FALSE
workflow.cwl | Provide the absolute path to the data directory, denoted as input_dir, to be mounted during the container runs | TRUE
workflow.cwl | Set store to false if log files should be withheld from the participants | FALSE
workflow.cwl | Add metrics and scores to private_annotations if they are to be withheld from the participants | FALSE
validate.cwl | Update the base image if the validation code is not Python | FALSE
validate.cwl | Remove the sample validation code and replace it with validation code for the challenge | TRUE
score.cwl | Update the base image if the scoring code is not Python | FALSE
score.cwl | Remove the sample scoring code and replace it with scoring code for the challenge | TRUE
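For reference, here is a minimal sketch of the kind of code that replaces the samples in validate.cwl and score.cwl. The column names, file paths, and metric are hypothetical; adapt them to the challenge's prediction-file format and goldstandard:

```python
import csv
import json

def validate(predictions_path):
    """Return a list of error messages for a predictions CSV (empty list means valid)."""
    errors = []
    expected_columns = {"patient_id", "probability"}  # hypothetical column names
    with open(predictions_path, newline="") as f:
        reader = csv.DictReader(f)
        if set(reader.fieldnames or []) != expected_columns:
            return [f"Columns must be exactly {sorted(expected_columns)}"]
        for line_num, row in enumerate(reader, start=2):
            try:
                prob = float(row["probability"])
            except ValueError:
                errors.append(f"Line {line_num}: 'probability' must be numeric")
                continue
            if not 0 <= prob <= 1:
                errors.append(f"Line {line_num}: 'probability' must be between 0 and 1")
    return errors

def score(predictions_path, goldstandard_path):
    """Compute a toy agreement metric between predictions and the goldstandard."""
    def load(path):
        with open(path, newline="") as f:
            return {row["patient_id"]: float(row["probability"]) for row in csv.DictReader(f)}
    preds, gold = load(predictions_path), load(goldstandard_path)
    matched = [pid for pid in gold if pid in preds]
    agreement = (
        sum(round(preds[pid]) == round(gold[pid]) for pid in matched) / len(matched)
        if matched else 0.0
    )
    return {"agreement": agreement}

if __name__ == "__main__":
    errs = validate("predictions.csv")
    if errs:
        print(json.dumps({"status": "INVALID", "errors": errs}))
    else:
        print(json.dumps({"status": "VALID", **score("predictions.csv", "goldstandard.csv")}))
```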
Push the changes up to GitHub when done.
13. On the instance, change directories to SynapseWorkflowOrchestrator/ and kick-start the orchestrator with:

```
docker-compose up -d
```
where -d will run the orchestrator in the background, allowing you to exit the instance without terminating the orchestrator.
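While the orchestrator is running detached, its output can still be followed from the same directory using standard docker-compose commands:

```
# Tail the orchestrator logs; Ctrl + C stops the tail, not the orchestrator
docker-compose logs -f
```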
Note: it may be helpful to not run the orchestrator in detached mode at first, so that you will be made aware of any errors with the orchestrator setup right away.

14. To make changes to the .env file (such as updating the number of concurrent submissions), stop the orchestrator with:

```
docker-compose down
```

Once you are done making updates, save the file and restart the orchestrator to apply the changes.
Log Files
As it’s running, the orchestrator will upload logs and prediction files to the “Logs” folder. For each user or team that submits to the challenge, two folders will be created:
<submitterid>/
<submitterid>_LOCKED/
where Docker and TOIL logs are uploaded to <submitterid>/, and prediction files are uploaded to <submitterid>_LOCKED/. Note that the LOCKED folders will not be accessible to the participants, in order to prevent data leakage.
The directory structure of “Logs” will look something like this:
```
Logs
├── submitteridA
│   ├── submission01
│   │   ├── submission01_log.txt
│   │   └── submission01_logs.zip
│   ├── submission02
│   │   ├── submission02_log.txt
│   │   └── submission02_logs.zip
│   ...
│
├── submitteridA_LOCKED
│   ├── submission01
│   │   └── predictions.csv
│   ├── submission02
│   │   └── predictions.csv
│   ...
│
...
```
If an error is encountered during any of the workflow steps, the orchestrator will update the submission status to INVALID and the workflow will stop. If, instead, the workflow runs to completion, the orchestrator will update the submission status to ACCEPTED. Depending on how the workflow is set up (configured in Step 12), participants may periodically be notified by email of their submission's progress.
[Diagram: the orchestrator and its interactions with Synapse]
Launch the Challenge
Requirements
One Sage account
Python 3.7+
Important TODOs:
Note: steps 1-5 below should be done in the live project.
1. Before proceeding with the launch, we recommend contacting Sage Governance to add a clickwrap for challenge registration. With a clickwrap in place, interested participants can only be registered if they agree to the terms and conditions of the challenge data usage.
If you are a Sage employee: submit a Jira ticket to the Governance board with the synID of the live project, as well as the team ID of the participants team.
2. Share all needed evaluation queues with the participants team with Can submit permissions. Once the challenge is over, we recommend updating the permissions to Can view to prevent late submissions. We also recommend sharing the evaluation queues with the general public so that the leaderboards are openly accessible.
3. After the challenge is launched, create a folder called “Data” and update its sharing settings: share the “Data” folder with the participants team only. Do not make the folder public or accessible to all registered Synapse users.
4. Upload any challenge data that is to be provided to the participants to the “Data” folder. DO NOT UPLOAD DATA until you have updated the folder's sharing settings.
5. To launch the challenge, that is, to copy the wiki pages of the staging project over to the live project, use synapseutils' copyWiki() in a Python script. For example:
```python
import synapseclient
import synapseutils

syn = synapseclient.login()

synapseutils.copyWiki(
    syn,
    "syn1234",                   # synID of the staging project
    destinationId="syn2345",     # synID of the live project
    destinationSubPageId=999999  # ID following .../wiki/ in the live project URL
)
```
When using copyWiki, it is important to specify the destinationSubPageId parameter. This ID can be found in the URL of the live project, where it is the number following .../wiki/.
Once copyWiki has been used, DO NOT RUN IT AGAIN! After the wiki has been copied over, all changes to the live project should instead be synced with challengeutils' mirror-wiki (see Update the Challenge below).
Stop the Orchestrator
15. On the instance, enter Ctrl + C (or Cmd + C), followed by:

```
docker-compose down
```
If you are running the orchestrator in the background, skip the first step and simply enter:

```
docker-compose down
```
Note: docker-compose must be run in the SynapseWorkflowOrchestrator/ directory. If you are not already in that directory, change directories first.
Note that if the challenge is currently active but you need to stop the orchestrator (e.g. to make updates to the .env file), it may be helpful to first check whether any submissions are currently being evaluated. If you are running the orchestrator in the background, you can monitor its activity by entering:
```
docker ps
```
If only one image is listed, e.g. sagebionetworks/synapse-workflow-orchestrator, this indicates that no submissions are currently running.
Otherwise, if you are not running the orchestrator in the background, read the logs on the terminal screen to determine whether there is currently activity.
Wiki Setup
Use the following questions to help plan and set up the Challenge site and Evaluation Queues.
How many Challenge questions (“sub-challenges”) will there be?
Will the participants submit a single file/model to answer all sub-challenges or will they submit a different file/model per sub-challenge?
What is the general timeline for the Challenge?
Will there be rounds? If so, how many?
Using rounds may help increase participation levels throughout the Challenge, as submission activity is usually high near the end of rounds/phases. It is best to have end dates during the mid-week if possible; this will ensure that there will be someone on-hand to help monitor and resolve issues should one arise.
Can users submit multiple submissions to a sub-challenge?
If so, should there be a limit in frequency? Examples: one submission per day, 3 submissions per week, 5 total, etc.
Setting a limit may help mitigate potential overfitting, and it also prevents a user/team from monopolizing the compute resources.
What sort of submissions will the participants submit?
Common formats supported by Sage: prediction file (i.e. csv file), Docker image
When can the truth files (goldstandard) and training data (if any) be expected?
Will the data be released upon the challenge end? After the embargo? Never?
Is the data sensitive?
If so, will a clickwrap be needed (an agreement between the participant and data provider that requires the former to click a button that they will agree to the policies put in place regarding data usage)? Should log files be returned? Will there be a need to generate synthetic data?
Who will be responsible for providing/writing the validation and/or scoring scripts?
If Sage, please provide as many details as possible regarding the format of a valid predictions file (e.g. number of columns, names of column headers, valid values, etc.) and all exceptional cases. For scoring, please provide the primary and secondary metrics, as well as any special circumstances for evaluation (for example, the CTD2 BeatAML primary metric is an average Spearman correlation, calculated from each drug’s Spearman correlation).
If not Sage, please provide the scripts in either Python or R. If needed, we do provide sample scoring models that you may use as a template, available in both Python and R.
Are scores returned to the participants immediately or should they be withheld until the Challenge end?
A typical challenge will immediately return the scores in an email upon evaluation completion; however, some past challenges did not return scores until after the end date.
There is also a “hybrid” approach, in which scores are immediately returned during the Leaderboard/Testing Phase but withheld during the Final/Validation Phase (so that participants do not know their performance until after the challenge ends).
When should the evaluation results/leaderboard be accessible to the participants?
Some past Challenges had active leaderboards (i.e. participants could readily view their ranking throughout the evaluation round) whereas other Challenges did not release the leaderboards until the round/Challenge was over.
Regarding writeups: when will these be accepted?
Should participants submit their writeups during submission evaluations or after the Challenge has closed?
A writeup is something we require of all participants in order to be considered for final evaluation and ranking. A writeup should list all contributing persons, thoroughly describe the methods used and any data used outside of the challenge data, and include all scripts, code, and prediction file(s)/Docker image(s). We require all of these so that, should the team be a top performer, we can ensure their code and final output are reproducible.
Monitor the Submissions
As challenge organizers, we recommend creating a Submission View to track and monitor submissions as they come in. This table will be especially useful when participants need help with their submissions.
Note: learn more about revealing scores and adding leaderboards in Evaluation Queues.
Steps
1. Go to the staging project and click on the Tables tab. Create a new Submission View by clicking on Add New… > Add Submission View.
2. Under "Scope", add evaluation queue(s) you are interested in monitoring. More than one queue can be added. Click Next. On the following screen, select which information to display - this is known as the schema.
We recommend the following schema for monitoring challenge submissions:
Column Description | Facet values?
---|---
Evaluation ID (rendered as the evaluation name) – recommended for Submission Views with multiple queues in scope | Recommended
Submission ID |
Date and time of the submission (in Epoch, rendered as a date) |
User or team who submitted (user or team ID, rendered as the username or team name) | Recommended
Docker image name – recommended for model-to-data challenges | Not recommended
Docker SHA digest – recommended for model-to-data challenges | Not recommended
Workflow status of the submission | Recommended
Evaluation status of the submission | Recommended
Validation errors for the predictions file (if any) | Not recommended
synID of the submission’s logs folder | Not recommended
synID of the predictions file (if any) | Not recommended
Submission annotations related to scores – the names used depend on the annotations applied in the scoring step of the workflow |
Note: columns that are not part of the default schema will need to be added manually by clicking the + Add Column button at the bottom of the Edit Columns window.
3. Click Save. A table of the submissions and their metadata will now be available for viewing and querying. Changes to the information displayed can be edited by clicking on the schema icon, followed by Edit Schema.
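The Submission View can also be queried programmatically, which is handy for ad-hoc monitoring. A sketch using the synapseclient Python package (the view synID and the status filter are placeholders):

```python
import synapseclient

syn = synapseclient.login()

# "syn789" is a placeholder for the Submission View's synID
query = syn.tableQuery("SELECT * FROM syn789 WHERE status = 'RECEIVED'")
submissions = query.asDataFrame()  # requires pandas
print(submissions.head())
```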
Update the Challenge
Updating Existing Wikis
1. Go to the staging project and navigate to the page(s) you wish to edit. Click on the pencil icon to Edit Project Wiki. Make edits as needed, then click Save.
2. Use challengeutils' mirror-wiki to push the changes to the live project:

```
challengeutils mirror-wiki staging_synid live_synid [--dryrun]
```
Use --dryrun to optionally preview which pages will be updated prior to doing an official sync.
Adding a New Wiki Page
1. Go to the staging project, click on Wiki Tools > Add Wiki Subpage. Enter a page title, then click OK. A new page should now be available.
2. On the new page, click on the pencil icon to Edit Project Wiki. Add the page content, then click Save.
3. Go to the live project and create a new wiki page with the same name as the new page in the staging project. mirror-wiki depends on the page titles being the same for synchronization.
4. Use challengeutils' mirror-wiki to push the changes to the live project, as in the previous section.
Note: using the --dryrun flag prior to officially mirroring can be helpful for ensuring that the pages to be updated are actually the ones intended. For example, if an update is only made on the main wiki page of a challenge, it is expected that only the first page will be updated.
Evaluation Queue Quotas
Updating an evaluation queue's quota can be done in one of two ways:
On Synapse, via Edit Evaluation Queue.
In the terminal, via challengeutils' set-evaluation-quota.
Example
The deadline for the first round of a challenge has been shifted from 12 January 2020 to 9 February 2021. The queues currently implement a "1 submission per day" limit; therefore, the Number of Rounds will need to be updated, NOT the Round Duration. This is because a "round" needs to stay one day long (86400000 Epoch milliseconds) so that Synapse can enforce the Submission Limit of 1 per day.
Note: if there is no daily (or weekly) submission limit, then updating the Round Duration would be appropriate. For example, say the final round of a challenge has a total Submission Limit of 2, that is, participants are only allowed two submissions during the entire phase. A "round", this time, is considered to be the entire phase, so updating the Round Duration (or end_date when using set-evaluation-quota) will be the appropriate step when updating the deadline for the queue(s).
Updating with challengeutils' set-evaluation-quota is more or less the same (except that round_duration must be given in Epoch milliseconds). Be aware that blank values will replace any existing quotas if they are not set when running this command.
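A command-line sketch of the daily-limit example above. The evaluation ID is a placeholder, and the flag names are assumptions based on the parameters described here, so confirm them with challengeutils set-evaluation-quota --help before running:

```
# Extend a "1 submission per day" queue by increasing the number of one-day rounds
challengeutils set-evaluation-quota 9614543 \
    --round_duration 86400000 \
    --num_rounds 28 \
    --submission_limit 1
```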
Extending the Deadline
It is not unheard of for there to be a change in the submission deadline. To extend the submission deadline date/time, you can either:
edit the Round End of an existing round; or
add a new round that will immediately start after the current one (note that this approach will reset everyone’s submission limit)
We also recommend notifying the participants of any changes to the timeline by posting an announcement.
Note: learn more about posting announcements in Utilizing the Discussion Board.
Updating the Workflow
For any changes to the CWL scripts or run_docker.py, make edits as needed to the scripts, then push the changes. We highly recommend conducting dryruns immediately after, so that errors are addressed in a timely manner.
Evaluation Docker Image
If your workflow uses a Docker image in validate.cwl and/or score.cwl, and updates were made to that image, pull the latest version on the instance with:
```
docker pull <image name>:<version>
```
Close the Challenge
Important TODOs:
Registration: remove/hide all "Register" buttons from the challenge site. Pages to search through: the main page, Participation Overview, and Submission Tutorial(s). You can replace the registration button with an alert well like this:
```
<div class="alert alert-info">
  <h4><strong>Registration is closed.</strong></h4>
  <p>Thank you to everyone who joined the challenge!</p>
</div>
```
Data access: disallow users from joining the participant team, thus barring them from accessing the challenge data after the challenge has completed
Go to Participants team page
Click on Team Actions button > Edit Team
Under Access, select “Team is locked, users may not join or request access. New users must be invited by a team manager.”
Evaluation queues: update the participants team’s permissions from Can submit to Can view
Writeups: link writeups to the final submissions
Note: see Collecting Writeups for more information.
Instances: stop and terminate all cloud compute instances for the challenge.
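If the compute instances live in AWS, a minimal clean-up sketch with the AWS CLI (the instance ID is a placeholder; terminating an instance is irreversible, so stop first if unsure):

```
# Stop the challenge instance (it can be restarted later if needed)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Permanently terminate it once the challenge is fully wrapped up
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```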
🌟 For additional assistance or guidance, contact the Challenges and Benchmarking team at cnb@sagebase.org.