
Synapse scenarios: File / Folder API


Ad hoc analysis

Alice is a data analyst / computational scientist starting a new project.  She creates a folder in her Linux home directory and populates it with a set of files (e.g. starting data).  She begins some exploratory statistical analysis in an interactive R session.  After some time she arrives at preliminary findings she wants to share with a biologist (Bob) she is collaborating with.  At this point she creates a new Synapse project.  At the OS command line, she runs a command like:

  syn add . syn1234 -recurse=true -location=local    # where syn1234 is the Synapse ID of her project

or, more naturally

  syn add . AlicesProject -recurse=true -location=local

At this point her Synapse project is populated with a mirror of her local filesystem folder, although all the files still live exclusively on her local file system.  Synapse stores some metadata on the files and folders (e.g. SHA1, the timestamp and user of their creation, perhaps file size).

Now, in her R session she has a plot and a data frame she'd also like to add to the project.  At the R command line:

  synAdd('AlicesProject/localTopFolder', aRDataframe)
  synAdd('AlicesProject/localTopFolder', aRPlot)

These commands save the dataframe and plot to files.  Location defaults to Synapse S3 storage, so there are now two additional files in the Synapse project.  As these were pushed up to S3, Synapse generates previews for the plot and dataframe files.
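Behind a call like synAdd, the client would have to serialize the in-memory object to a file before uploading it. A sketch of that first step in Python (a hypothetical helper for illustration; the real client, its file formats, and its defaults may differ):

```python
import csv
import io

def serialize_dataframe(rows, header):
    """Turn an in-memory table into the CSV bytes a client like the
    hypothetical synAdd might write out before uploading the resulting
    file to Synapse storage."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)   # column names first
    writer.writerows(rows)    # then one line per row
    return buf.getvalue().encode("utf-8")
```

The upload itself, and the preview generation mentioned above, would then operate on the written file like any other.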

At the local OS command line:

  syn get . -recurse=true

This pulls down the two new result files locally.  The dataframe could be either a .csv or an R binary file.
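One way to picture the decision logic behind such a recursive get is a comparison of the remote metadata against local file hashes; only files that are missing or changed locally get fetched. A hypothetical sketch, not the actual client's algorithm:

```python
def sync_plan(remote_records, local_hashes):
    """Given remote metadata records ({"path": ..., "sha1": ...}) and a
    {path: sha1} map of files already on disk, return the paths a
    recursive `syn get` would pull down: anything absent locally or
    whose contents differ from the Synapse record."""
    to_fetch = []
    for rec in remote_records:
        local = local_hashes.get(rec["path"])
        if local is None or local != rec["sha1"]:
            to_fetch.append(rec["path"])
    return to_fetch
```

Comparing content hashes rather than timestamps avoids re-downloading files that were merely touched.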

At this point she switches over to the Synapse web client and uses previews of the two new results files to write up a summary of her findings in the project wiki. Then she adds Bob to the project and emails him a link to view the results.

Bob is able to review Alice's findings and comment on the wiki pages.  He has some new data he wants to share with Alice, so he uploads it to the project from the web client.  Alice is able to pull the files down using an analytical client and continue working.

Later, Alice would like her analyst friend Carl at another institution to check her analysis (or she would like a backup of her work, or access to it from another machine...):

  syn put . -recurse=true -location=SynapseStorage

This pushes the files up to Synapse's native S3 storage.  Carl can now move the project over to his own computer, or to his Amazon account.  (Why not just sync files using Git, Dropbox, or any number of other solutions?  Assume some of the files are large, e.g. raw genomics data.  In that case the files always remain local, and if Carl wants to access them he will get an account on Alice's system.  Different folders of the project might be stored in different places.)

The project could evolve for some time in this fashion, relying mainly on the file/folder API, wiki, and collaboration features.  Extensions could let users manage multiple storage locations (e.g. their own S3 buckets), or have clients that automatically sync content in the background.

Reproducible Ad hoc analysis

After some time, Alice has a result she believes is important and will eventually form part of a paper, and she wants to make sure Carl can see exactly what she did.  At this point she builds a set of R scripts which process the data through a series of steps.  She stores the scripts in a GitHub repository associated with the project.  She also uses a few command-line bioinformatics tools on Linux as part of her process.  Now she re-runs the analysis, this time recording what she did using Synapse provenance features to link all the files, starting with raw data, through all intermediate results, and ending with a set of figures, vectors, and other output data.  All this can be pushed up to Synapse as before, but now there is a graphical representation of her process available in Synapse that Carl can use to review her work, including links to the code and tools she used.  (The command-line client would need to push up the commands used to run tools at the Linux command line.)  If Carl and Alice are working on the same system, access to the code or the commands used to execute system programs should give Carl a pretty good idea of exactly what Alice did, and she can provide additional commentary in the wiki.
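The provenance record described above can be pictured as a chain of activities, each recording the command that ran, the files (and code) it used, and the files it generated; walking the chain backwards from a final figure recovers everything it depends on. A sketch of that idea (illustrative structure only, not the actual Synapse provenance schema):

```python
def provenance_step(name, command, used, generated):
    """One activity in a provenance chain: the command that ran, the
    inputs it used (data files and scripts alike), and the outputs it
    generated."""
    return {"name": name, "command": command,
            "used": list(used), "generated": list(generated)}

def lineage(steps, target):
    """Walk the chain backwards from a final output to every file it
    ultimately depends on -- the graph a reviewer like Carl would see."""
    deps, frontier = set(), {target}
    while frontier:
        current = frontier.pop()
        for step in steps:
            if current in step["generated"]:
                new = set(step["used"]) - deps
                deps |= new
                frontier |= new
    return deps
```

Because scripts appear among the inputs, the lineage of a figure naturally includes links back to the code that produced it.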

An extension of this scenario, in the case where both users are working in Amazon, would include capturing the specifics of the environment used to run the analysis (AMI, instance size, etc.) as additional parts of the provenance record.  These environment descriptions could be stored as Files pointing to publicly-accessible AMIs, allowing anyone to execute the work (in their own AWS account).  In fact, Alice may want to run the analysis on Amazon again before publication to ensure that her reviewers can step into her analysis, using her project as supplemental material to her paper.
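Such an environment description could be a small structured record alongside the other provenance entries. One possible shape (all field names invented here for illustration, not the actual Synapse schema):

```python
def environment_record(ami_id, instance_type, region):
    """A provenance entry describing the compute environment an
    analysis ran in -- here an AWS machine image -- so anyone can
    relaunch the same environment in their own account."""
    return {
        "type": "ExecutionEnvironment",
        "ami": ami_id,                 # publicly accessible machine image
        "instanceType": instance_type, # e.g. instance size
        "region": region,
    }
```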

Pipelined Analysis

It turns out that Alice's paper is a hit and now she has lots of biologists asking for help running similar analyses on different data sets.  Alice turns her
