
Challenge Platform 2.0 Stories

Users

  • Challenge Organizers - An umbrella group of roles that develop and administer individual challenges.

    • Challenge Initiator - An individual who wants to create/host a challenge. This user is the person who will create the challenge page and write the challenge materials. They will also engage with the challenge community to generate interest and participation. This user is most likely the driving force behind a challenge and may be a funder, data provider, scientific lead, or project manager.

    • Challenge Reviewer - This user is responsible for reviewing the draft challenge pages prior to deploying them to production. This may be the PI on the challenge project. They are one of the Challenge Organizers.

    • Challenge Project Manager - This user is responsible for coordinating between all the other Challenge Organizers.

    • Challenge Scientific Lead - This user is one of the Challenge Organizers and is responsible for developing the scientific question and evaluation methods for the challenge.

    • Challenge Infrastructure Lead - This user is responsible for building and managing the infrastructure that runs the challenge. This may be remote servers running the orchestrator tool or AWS environments running the orchestrator, and it may involve coordinating between multiple institutions to run multiple remote servers.

    • Challenge Data Provider - This user is the point of contact at the organization/lab providing data for the challenge.

    • Challenge Funder - The organization(s) funding the challenge. They are recognized by their logo on the Challenge Banner and in the Challenge description. Some members of the funding organization(s) often serve as Data Providers or Scientific Leads.

    • Challenge Governance Liaison - This user is a contact on the Sage governance team who is responsible for reviewing the governance policies of the challenge. They are also responsible for creating click-wrap agreements and gathering signatures for the various MOUs, DTAs, or DURs.

  • Challenge Participant - This user is the researcher, student, or data scientist taking part in the challenge. This user needs to download data, create models, and submit either a prediction file, a Docker image, or some other type of submission. This is a broad role with some special cases:

    • Benchmarking Participant - Much like a challenge participant, this user is the researcher, student, or data scientist taking part in a benchmarking project.

    • Prospective Challenge Participant - This user is a potential challenge participant who needs to be able to explore current, past, and future challenges. They need to know what questions each challenge asked, what data was available, and what domain expertise was relevant.

    • Challenge Team Owner - A Challenge Participant with privileges over Team membership, or an individual participating as a team of one.

    • Challenge Team Member - A Challenge Participant who coordinates submissions with other members of a Team.

    • Challenge Team Observer - A Challenge Participant who can view the submissions and scores of a Team, but cannot make submissions on behalf of the Team. For example, a Teacher may be an observer for several Teams of students.

  • Challenge Archivist - This user is responsible for archiving the final challenge, and ensuring that metadata is accessible.

  • Organization Manager - This user is the manager for the organization’s account on the challenge platform.

  • Challenge Observer - A user who wishes to browse challenges to understand current activity in the space. Challenge Observers may or may not have login credentials and may have no interest in participating in a challenge, only in reading the public information about challenges.

    • Challenge Results Consumer - This is a challenge participant or observer who is checking the results of the Challenge. They will view the results of a Challenge during open submissions (if permitted by the Challenge settings) or after a Challenge has completed.

Responsibilities

  • Data quality control

  • Evaluation metric design

  • Discussion board monitoring

  • Infrastructure dry runs


Narratives

Challenge Creation

  • Propose a Challenge

I have some new data. I want to start a challenge. This is my question. Help me.

  • Create challenge landing page draft

  • Submit challenge landing page draft for review

  • Review challenge landing page draft

  • Create challenge data

  • Identify challenge infrastructure needs

  • Identify challenge questions

  • Identify challenge evaluation metrics

  • Work with governance to establish the data terms of use

Infrastructure installation and management

  • Initializing the orchestrator workflow engine in a Sage provisioned cloud (AWS, GCP, Azure) environment.

  • Initializing the orchestrator workflow engine in a remote server provisioned by a collaborating institution.

  • Initializing the orchestrator workflow engine on multiple, federated remote servers provisioned by multiple collaborating institutions. (A sketch contrasting these three deployment scenarios follows this list.)
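The three scenarios above differ mainly in who provisions the compute and how many sites are involved. As a rough illustration only, the sketch below models them as data; the OrchestratorDeployment class, its field names, and the example hostnames are hypothetical and do not correspond to a defined orchestrator configuration format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OrchestratorDeployment:
    """Hypothetical description of where the orchestrator workflow engine runs."""
    challenge_name: str
    mode: str                                       # "sage-cloud" | "remote-server" | "federated"
    hosts: List[str] = field(default_factory=list)  # one host per participating institution

# Scenario 1: Sage-provisioned cloud environment (AWS, GCP, or Azure).
sage_cloud = OrchestratorDeployment(
    challenge_name="example-challenge",
    mode="sage-cloud",
    hosts=["orchestrator.example-challenge.example.org"],  # hypothetical hostname
)

# Scenario 2: a single remote server provisioned by a collaborating institution.
remote_server = OrchestratorDeployment(
    challenge_name="example-challenge",
    mode="remote-server",
    hosts=["compute.partner-university.edu"],  # hypothetical hostname
)

# Scenario 3: multiple, federated remote servers provisioned by multiple
# collaborating institutions; the Challenge Infrastructure Lead coordinates across sites.
federated = OrchestratorDeployment(
    challenge_name="example-challenge",
    mode="federated",
    hosts=["site-a.example.edu", "site-b.example.org", "site-c.example.net"],
)
```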

Challenge Management

  • Challenge Launch

Challenge Completion

  • Post Challenge analysis


Compiling and publishing the challenge results after the challenge has completed.

After a challenge is complete and the winners have been established, the Challenge Initiator or Challenge Project Manager clicks the “Archive Challenge” button. A popup form prompts the user to fill out a few details, including the synapse_id of the winning team(s), the synapse_id of the winning team(s)’ project wiki write-up, and the submission_id of the winning submission or a link to the winning code on GitHub. After clicking confirm, a prompt asks the user to confirm that they want to archive the challenge.

Upon confirmation, the platform closes all queues, updates the status of the challenge to complete, and creates a final results page that displays the leaderboard, the winning team, the winning team’s method write-up, and the winning submission (a link to the Docker container with instructions for downloading it, a link to the GitHub repo with the winning model, etc.), along with relevant visualizations of the overall performance of the challenge results (aggregated AUROC curves, calibration curves, etc.). A table of links to the other teams’ write-ups, including the team names and their scores, is also published. The results web page is editable by all Challenge Organizers and can be edited at any time. After the flagship manuscript of the challenge is published, the DOI of the paper is also added under a “Challenge publications” section.
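As a rough sketch of the information this archive flow collects and the actions it triggers, the Python below models the popup form and the confirmation step. The ArchiveChallengeRequest class, the archive_challenge function, and the action names are hypothetical placeholders for the platform behavior described above, not an actual API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArchiveChallengeRequest:
    """Details collected by the hypothetical 'Archive Challenge' popup form."""
    winning_team_synapse_ids: List[str]            # synapse_id of the winning team(s)
    winning_writeup_synapse_ids: List[str]         # synapse_id of each winning team's project wiki write-up
    winning_submission_id: Optional[str] = None    # submission_id of the winning submission, or ...
    winning_code_repo_url: Optional[str] = None    # ... a link to the winning code on GitHub
    publication_dois: List[str] = field(default_factory=list)  # added later, once the flagship manuscript is out

def archive_challenge(challenge_id: str, request: ArchiveChallengeRequest, confirmed: bool) -> dict:
    """Return the archive actions as a plan rather than performing side effects.

    The action names mirror the narrative above (close queues, mark complete,
    build a final results page); they are descriptive placeholders, not real
    platform API calls.
    """
    if not confirmed:
        return {"challenge_id": challenge_id, "actions": []}  # user declined the "are you sure?" prompt

    if not (request.winning_submission_id or request.winning_code_repo_url):
        raise ValueError("Provide a winning submission_id or a link to the winning code on GitHub.")

    return {
        "challenge_id": challenge_id,
        "actions": [
            "close_all_submission_queues",
            "set_status:complete",
            "create_final_results_page",  # leaderboard, winners, write-ups, aggregate AUROC/calibration plots
        ],
        "results_page": {
            "winning_teams": request.winning_team_synapse_ids,
            "winning_writeups": request.winning_writeup_synapse_ids,
            "winning_submission": request.winning_submission_id or request.winning_code_repo_url,
            "publications": request.publication_dois,
        },
    }

# Example usage with made-up Synapse IDs and a made-up repository URL.
plan = archive_challenge(
    challenge_id="syn00000000",
    request=ArchiveChallengeRequest(
        winning_team_synapse_ids=["syn11111111"],
        winning_writeup_synapse_ids=["syn22222222"],
        winning_code_repo_url="https://github.com/example/winning-model",
    ),
    confirmed=True,
)
```

In the actual platform these actions would be side effects (closing the queues, updating the challenge status, rendering the results page) rather than a returned plan; the sketch only fixes the inputs and the order of operations described above.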


Challenge Participation

  • Registering for a challenge

  • Forming a team

  • Forming a single-person team

  • Forming a team as a professor for a class
