submission of Docker repository

Description

A submission is a specific commit, identified by a digest (sha256).

In the submission call, 'entity' is the Synapse entity for a repo. It would be convenient if the user could specify just the repo name rather than retrieving the entity, but since we support external repos there is not a unique entity for a given repo name.
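
For illustration, submitting a Docker repository with the Python client might look like the minimal sketch below. This assumes the dockerTag parameter added in the pull request discussed in the activity thread; the entity and evaluation IDs are placeholders.

    import synapseclient

    syn = synapseclient.login()

    # Retrieve the DockerRepository entity (placeholder ID).
    repo = syn.get("syn12345678")

    # Submit to an evaluation queue (placeholder ID). The client resolves
    # the tag to its sha256 digest so the submission pins a specific commit.
    submission = syn.submit(
        evaluation="9614112",
        entity=repo,
        dockerTag="latest",
    )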

See also:
https://sagebionetworks.jira.com/browse/PLFM-4017

Environment

None

Activity

Thomas Yu
July 15, 2019, 11:22 PM

For a challenge, a couple of users need to submit Synapse Docker repositories. I have a pull request for this in the Python client, which I will point them to.

Kimyen Truong
July 18, 2019, 1:38 AM

Thanks for doing the work. I will review the PR.

To help me determine whether this ticket needs to be resolved in 2.0 or in 1.9.x, can you tell me a little more about this challenge? Is it running already? Do you have users submitting to this challenge with the Python/R client?

If it is already running, then this must go to 1.9.x. If the challenge will start in the near future, can you let me know when it starts?
Thank you.

Brian White
July 19, 2019, 8:38 PM

Kimyen and I discussed that this can wait for 2.0.

 

To answer her specific questions: the challenge is already running, but only Andrew Lamb and I would use this functionality. We will likely want to do so within the next few weeks, but we can do so from the pull request.

 

This functionality is important because in several challenges I have seen the need to repeatedly re-submit models in batch, which is tedious to do manually. Here are some specific examples (a sketch of a batch loop follows the list):

  1. In the myeloma challenge, I implemented ~4 “baseline” models used in our dry runs. I would often tweak these and would need to re-run against each of the 2 or 3 sub-challenges. Running 4 x 3 = 12 models manually once is OK, but doing it repeatedly as I am modifying the models is very tedious. I probably did this 10 times.

  2. In the current deconvolution challenge, we are iteratively adding datasets to the leaderboard queue. I would like to run our 3-4 baseline models each time I add a dataset to the queue as a way to QC the dataset. It takes me a day or more to prepare a dataset for the queue and there are a potentially large number of datasets. I will stop collecting datasets for the queue once I have 5 or 6. So, I don’t want to take the time to prepare ~10-20 datasets, put them in the leaderboard queue, and then find that I have many more than I need. That would be a wasted effort. More efficient would be to test each as I add it to the queue. Once I get to 5 or 6, I can stop.
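
A batch loop over the baseline models would make both workflows routine. A minimal sketch, assuming the dockerTag support from the pull request; all entity and queue IDs below are placeholders:

    import synapseclient

    syn = synapseclient.login()

    BASELINE_REPOS = ["syn11111111", "syn22222222", "syn33333333"]  # placeholder repo entity IDs
    SUBCHALLENGE_QUEUES = ["9610001", "9610002", "9610003"]         # placeholder evaluation IDs

    # Re-submit every baseline model to every sub-challenge queue.
    for repo_id in BASELINE_REPOS:
        repo = syn.get(repo_id)
        for queue_id in SUBCHALLENGE_QUEUES:
            syn.submit(evaluation=queue_id, entity=repo, dockerTag="latest")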

 

Kimyen Truong
July 19, 2019, 8:52 PM

I really appreciate your response and the details. It's very helpful for me. Thank you!

Thomas Yu
March 19, 2020, 2:39 AM

Looks good.

Assignee

Thomas Yu

Reporter

Bruce Hoff

Labels

None

Validator

None

Development Area

None

Release Version History

None

Components

Fix versions

Priority

Major