This is a high-level outline of the types of computational work we are aiming to support with Synapse, and how some typical operations map against our API. The goal is to convey the user's conceptual model of the system and the benefits gained by using it, not an exhaustive summary of all functions needed to support this work. See also Entities, Files, and Folders Oh My!

Ad hoc analysis - private & exploratory

Alice is a data analyst / computational scientist starting a new project. She creates a folder in her local Linux home directory and populates it with a set of files (e.g. starting raw data) obtained from Bob, her biologist friend. She starts some exploratory statistical analysis in an interactive R session. Initially she doesn't know when, or even if, she will find anything of interest, so there is probably a period of time where Synapse is not involved at all. After some time she arrives at some preliminary findings she wants to at least remember for her own future reference. At this point she creates a new Synapse project and adds the local folder to it. Command line interaction with Synapse might look something like:

...
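
As a rough sketch of that first push (shown with the Synapse Python client as a stand-in for the equivalent command-line calls; the project name, file paths, and annotation values here are purely illustrative):

    import synapseclient
    from synapseclient import Project, Folder, File, Wiki

    syn = synapseclient.Synapse()
    syn.login()  # credentials come from the Synapse config file or a prompt

    # A project acts as the dashboard entry for this piece of work
    project = syn.store(Project("Alice - exploratory analysis of Bob's data"))

    # Mirror the local folder of raw data obtained from Bob
    raw = syn.store(Folder("raw_data", parent=project))
    for path in ["data/samples.tsv", "data/expression_matrix.tsv"]:
        f = File(path, parent=raw)
        # Attribute-style annotations make the project easier to find later
        f.collaborator = "Bob"
        f.status = "exploratory"
        syn.store(f)

    # A short wiki page serves as a notebook for her future self
    syn.store(Wiki(owner=project, title="Notes",
                   markdown="Preliminary findings; see raw_data for Bob's files."))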

  • Synapse as a dashboard of all her projects, regardless of where the data lives or who the collaborators are (this is likely one of many projects she is switching among)
  • Wiki as a notebook for her future self
  • Annotations to make the data / project easier to find later if it goes dormant for a while
  • Ability to easily move projects between different work environments (PC, shared computational servers, cloud)

Ad hoc analysis - collaborative

After some time she arrives at some preliminary findings she wants to share with her collaborator Bob (more of a biologist). She adds Bob to the project and emails him a link to view the results. Bob is able to review Alice's findings and comment on the wiki pages. He has some new data he wants to share with Alice, so he uploads it to the project from the web client. Alice receives a notification (via configured email notifications, project activity history, etc.) and pulls the files down to her local environment to continue working.

...
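
A sketch of the sharing and synchronization steps, again with the Python client. Bob's upload happens in the web client, so this only covers Alice's side; Bob's Synapse user ID, the project ID, and the exact access types granted are placeholders:

    import synapseclient

    syn = synapseclient.Synapse()
    syn.login()

    project_id = "syn1234567"   # placeholder: Alice's project
    bob_user_id = 3334444       # placeholder: Bob's Synapse user ID

    # Give Bob enough access to review the wiki and upload his new data
    syn.setPermissions(project_id, principalId=bob_user_id,
                       accessType=["READ", "DOWNLOAD", "CREATE", "UPDATE"])

    # Later, after Bob's upload, pull the files sitting directly under the
    # project down to Alice's local working directory
    for child in syn.getChildren(project_id, includeTypes=["file"]):
        syn.get(child["id"], downloadLocation="./from_synapse")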

  • Authorization controls over project contents
  • Synchronize files among multiple environments with concurrent use (different institutions' in-house systems, cloud offerings, etc.)
  • Shared online collaborative workspace to pull key findings together from multiple people and document project status

Reproducible ad hoc analysis

After some time, Alice has a result she believes is important and will eventually form part of a paper, and she wants to make sure Carl can see exactly what she did. At this point she builds a set of R scripts which process the data through a series of steps. She stores the scripts in a GitHub repository associated with the project. She also uses a few bioinformatics tools installed on her local system, run from the Linux command line, as part of her process. Now she re-runs the analysis, this time recording what she did using Synapse provenance features to link all the files, starting with the raw data, through all intermediate results, and ending with a set of figures, vectors, and other output data. All of this can be pushed up to Synapse as before, but now there is a graphical representation of her process available in Synapse that Carl can use to review her work, including links to the code and tools she used. (The command line client would need to push up the commands used to run tools at the Linux command line.) If Carl and Alice are working on the same system, access to the code or the commands used to execute system programs should give Carl a pretty good idea of exactly what Alice did, and she can provide additional commentary in the wiki and/or edit the provenance records to provide more details (e.g. version info for some of the tools she used).

...
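
One way the provenance recording could look through the Python client; the GitHub URLs, tool name, file paths, and syn IDs are placeholders rather than anything Synapse prescribes:

    import synapseclient
    from synapseclient import File, Activity

    syn = synapseclient.Synapse()
    syn.login()

    raw_id = "syn111"          # placeholder: the raw data from Bob
    results_folder = "syn222"  # placeholder: a results folder in the project

    # Describe one analysis step: what data was used, what code was executed
    act = Activity(name="Normalize and cluster", used=[raw_id])
    act.executed(url="https://github.com/alice/analysis-scripts/blob/master/normalize.R",
                 name="normalize.R")
    act.executed(url="http://example.org/some-bioinformatics-tool",
                 name="command-line tool, v1.2")

    # Store the intermediate result with its provenance attached
    normalized = syn.store(File("results/normalized.tsv", parent=results_folder),
                           activity=act)

    # The final figures point back at the intermediate result, completing the chain
    fig_act = Activity(name="Generate figures", used=[normalized])
    fig_act.executed(url="https://github.com/alice/analysis-scripts/blob/master/figures.R",
                     name="figures.R")
    syn.store(File("results/clusters.png", parent=results_folder), activity=fig_act)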

  • Ease with which additional people can step in and review / contribute to the work
  • Move the project towards a publicly publishable state

Pipelined analysis

It turns out that Alice's paper is a hit and now she has lots of biologists asking for help running similar analyses on different data sets. She converges on a particular structure to capture the results of various intermediate stages of her analysis, e.g. to help out a new collaborator (Diane) in a new project:

...
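
A sketch of stamping out that converged structure for Diane's new project with the Python client; the project and folder names here are purely illustrative:

    import synapseclient
    from synapseclient import Project, Folder

    syn = synapseclient.Synapse()
    syn.login()

    # The same skeleton gets created for each new collaboration
    project = syn.store(Project("Diane - differential expression analysis"))
    for name in ["raw_data", "preprocessed", "analysis_results", "figures"]:
        syn.store(Folder(name, parent=project))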

  • Consistent structures and hardened pipelines evolve out of ad hoc work, supporting common preprocessing
  • Structure work for large-scale comparison of methods / data in public or private challenges