Ad hoc analysis - private & exploratory
Alice is a data analyst / computational scientist starting a new project. She creates a folder in her Linux home directory and populates it with a set of files (e.g. starting raw data) obtained from Bob, her biologist friend. She starts some exploratory statistical analysis in an interactive R session.
Benefits (from Justin)
Organization. Notebook for future self.
Access in multiple environments.
Ad hoc analysis - Collaborative
After some time she arrives at some preliminary findings she wants to share with Bob, the biologist she is collaborating with. At this point she creates a new Synapse project. At the OS command line, she runs a command like:
syn add . syn1234 --recurse=true --location=local   # syn1234 is the Synapse ID of her project
or, more naturally
syn add . AlicesProject --recurse=true --location=local
At this point her Synapse project is populated with a mirror of her local filesystem folder, although all the files still live exclusively on her local file system. Synapse holds some metadata on the files and folders (e.g. SHA1 hash, timestamp and user of creation, possibly file size).
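A minimal sketch of the metadata the command-line client might collect while mirroring a folder tree. The `syn` command above is hypothetical, and `collect_metadata` is an illustrative name, not part of any real client; the point is only that the files themselves stay local while their fingerprints go to Synapse:

```python
import hashlib
import os


def collect_metadata(root):
    """Walk a local folder tree and record, for each file, the metadata
    Synapse would mirror: relative path, SHA1 hash, size, and mtime.
    The file contents themselves stay on the local file system."""
    records = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sha1 = hashlib.sha1(f.read()).hexdigest()
            stat = os.stat(path)
            records.append({
                "path": os.path.relpath(path, root),
                "sha1": sha1,
                "size": stat.st_size,
                "mtime": stat.st_mtime,
            })
    return records
```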
Now, in her R session she has a plot and a data frame she'd also like to add to the project. At R command line:
synAdd('AlicesProject/localTopFolder', aRDataframe)
synAdd('AlicesProject/localTopFolder', aRPlot)
These commands save the dataframe and plot to files. The location defaults to Synapse S3 storage, so there are now two additional files in the Synapse project. Because these were pushed up to S3, Synapse generates previews for the plot and dataframe files.
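In Python terms, an analytical client's equivalent of `synAdd` might first serialize the in-memory object to a file and then hand that file to the upload path. A sketch, with the upload step omitted and all names hypothetical:

```python
import csv
import os
import tempfile


def serialize_dataframe(rows, name="result"):
    """Write a list-of-dicts 'dataframe' to a CSV file in a fresh temp
    directory, returning the path that the client would then upload to
    Synapse storage (the upload itself is not shown here)."""
    path = os.path.join(tempfile.mkdtemp(), name + ".csv")
    fieldnames = list(rows[0].keys())
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
    return path
```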
At local OS command line
syn get . --recurse=true
This pulls the two new result files down locally. The dataframe could be stored as either a .csv or an R binary file.
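One way such a `syn get` could decide what to download is by comparing the remote metadata against local file contents and fetching only what is missing or changed. A sketch under that assumption (function and index names are illustrative):

```python
import hashlib
import os


def files_to_fetch(remote_index, local_root):
    """Given the remote index (relative path -> SHA1) a 'syn get' might
    consult, return the paths that are missing locally or whose local
    content differs, i.e. the files the client must download."""
    stale = []
    for rel_path, remote_sha1 in remote_index.items():
        local_path = os.path.join(local_root, rel_path)
        if not os.path.exists(local_path):
            stale.append(rel_path)
            continue
        with open(local_path, "rb") as f:
            if hashlib.sha1(f.read()).hexdigest() != remote_sha1:
                stale.append(rel_path)
    return stale
```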
At this point she switches over to the Synapse web client and uses previews of the two new results files to write up a summary of her findings in the project wiki. Then she adds Bob to the project and emails him a link to view the results.
Bob is able to review Alice's findings, comment on the wiki pages. He's got some new data he wants to share with Alice so he uploads it to the project from the web client. Alice is able to pull the files down using an analytical client and continue working.
Later, Alice would like her analyst friend Carl at another institution to check her analysis. (or would like a backup of her work, or access to it from another machine...)
syn put . --recurse=true --location=SynapseStorage
This pushes the files up to Synapse's native S3 storage. Carl can now move the project over to his own computer, or his Amazon account. (Why not just sync files using Git, or Dropbox, or any number of other solutions? Assume some of the files are large. e.g. raw genomics data. In this case files always remain local, and if Carl wants to access them he will get an account on Alice's system. Different folders of the project might be stored in different places.)
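Since different folders of the project might be stored in different places, the client would need some way to resolve which backend a given file lives in. A minimal sketch, assuming a per-folder location map with longest-prefix matching (all names illustrative):

```python
def resolve_storage(path, folder_locations, default="SynapseStorage"):
    """Resolve which storage backend a file lives in by finding the
    longest matching folder prefix in a per-folder location map,
    falling back to Synapse's native S3 storage by default."""
    best = ""
    location = default
    for folder, loc in folder_locations.items():
        if path.startswith(folder.rstrip("/") + "/") and len(folder) > len(best):
            best = folder
            location = loc
    return location
```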
The project could evolve for some time in this fashion, relying mainly on the file-folder API, wiki, and collaboration features. Extensions could let users manage multiple storage locations (e.g. their own S3 buckets), or provide clients that automatically sync content in the background.
Reproducible Ad hoc analysis
After some time, Alice has a result she believes is important and will eventually form part of a paper, and she wants to make sure Carl can see exactly what she did. At this point she builds a set of R scripts which process the data through a series of steps. She stores the scripts in a GitHub repository associated with the project. She also uses a few bioinformatics tools installed on her local system, run from the Linux command line, as part of her process. Now, she re-runs the analysis, this time recording what she did using Synapse provenance features to link all the files, starting with raw data, through all intermediate results, and ending with a set of figures, vectors, and other output data. All this can be pushed up to Synapse as before, but now there is a graphical representation of her process available in Synapse that Carl can use to review her work, including links to the code and tools she used. (The command-line client would need to push up the commands used to run tools at the Linux command line.) If Carl and Alice are working on the same system, access to the code or commands used to execute system programs should give Carl a pretty good idea of exactly what Alice did, and she can provide additional commentary in the wiki and/or edit the provenance records to provide more detail (e.g. version info for some of the tools she used).
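The shape of such a provenance record can be sketched in a few lines: each analysis step records what it used (inputs), what it executed (code and tools), and what it generated, and the graphical view falls out as edges between artifacts. Field and function names below are illustrative, not the actual Synapse provenance schema:

```python
def make_activity(name, used, executed, generated):
    """Minimal provenance record in the spirit described above:
    inputs ('used'), code/tools ('executed'), outputs ('generated')."""
    return {"name": name, "used": list(used),
            "executed": list(executed), "generated": list(generated)}


def provenance_edges(activities):
    """Derive graph edges (source artifact -> output artifact) for a
    graphical view of the process, from raw data through intermediate
    results to final figures."""
    edges = []
    for act in activities:
        for src in act["used"] + act["executed"]:
            for dst in act["generated"]:
                edges.append((src, dst))
    return edges
```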
An extension of this scenario in the case where both users are working in Amazon would include capturing the specifics of the environment used to run the analysis (AMI, size, etc) as additional parts of the provenance record. These environment descriptions could be stored as Files pointing to publicly-accessible AMIs, allowing anyone to execute the work (in their own AWS account). In fact, Alice may want to rerun the analysis on Amazon again before publication to ensure that her reviewer can step into her analysis, using her project as supplemental materials to her paper.
Pipelined Analysis
It turns out that Alice's paper is a hit and now she has lots of biologists asking for help running similar analyses on different data sets. Alice turns her set of scripts into a publicly-hosted R package. This includes the development of R objects specific to her analysis that encapsulate some of the key steps / data structures that are handed off between different steps. She also includes helper functions that store and retrieve the pieces of the object in Synapse as a set of folders, files, and annotations that follow a particular convention. She then develops a widget for the Synapse UI that presents a visualization of this data in a way understandable by her collaborators. This gives her and other analysts an object-centric view of the data structures relevant to this analysis in R, and the ability to easily load and save these objects to/from Synapse. Other analysts can do the same thing in other environments (e.g. Python) by defining similar objects and helper functions.
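The object-plus-helper-functions idea can be sketched concretely: an analysis-specific object whose save/load helpers map its pieces onto a folder/file/annotation convention. In this sketch a local directory stands in for Synapse storage, and the class, field names, and on-disk layout are all illustrative:

```python
import json
import os


class AnalysisResult:
    """Sketch of an analysis-specific object whose helper functions map
    its pieces onto a folder/file/annotation convention (a local
    directory stands in for a Synapse folder here)."""

    def __init__(self, params, scores):
        self.params = params   # would become annotations on the folder
        self.scores = scores   # would become a data file in the folder

    def save(self, root):
        """Store the object as a folder of files following the convention."""
        os.makedirs(root, exist_ok=True)
        with open(os.path.join(root, "annotations.json"), "w") as f:
            json.dump(self.params, f)
        with open(os.path.join(root, "scores.json"), "w") as f:
            json.dump(self.scores, f)

    @classmethod
    def load(cls, root):
        """Rebuild the object from a folder laid out by save()."""
        with open(os.path.join(root, "annotations.json")) as f:
            params = json.load(f)
        with open(os.path.join(root, "scores.json")) as f:
            scores = json.load(f)
        return cls(params, scores)
```

Analysts in other environments (e.g. Python, as here, or R) could define the same convention independently and interoperate through the shared folder layout.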
Synapse now needs to help Alice run large numbers of analytical pipelines for various collaborators who want her to do the analysis for them. She has them contribute data to their own projects following particular conventions for the raw data, then runs her pipeline and publishes the results back, even auto-generating a first draft of the wiki. She then uses Synapse to communicate her results back to these collaborators.
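Auto-generating that first-draft wiki could be as simple as templating the pipeline's result summaries into markup for Alice to edit; a purely illustrative sketch:

```python
def draft_wiki(project_name, results):
    """Auto-generate a first-draft wiki page (as markup text) from a
    list of (step name, summary) pairs produced by the pipeline, as a
    starting point the analyst would then edit by hand."""
    lines = ["# Results for " + project_name, ""]
    for name, summary in results:
        lines.append("## " + name)
        lines.append(summary)
        lines.append("")
    return "\n".join(lines)
```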
If we have many of these sorts of objects, an extension to this use case is for Synapse to provide central storage and retrieval of these object definitions, and/or ways to autogenerate the objects and helper functions from existing Synapse data structures used as prototype instances.