...
It turns out that Alice's paper is a hit, and now she has many biologists asking for help running similar analyses on different data sets. Alice turns her set of scripts into a publicly hosted R package. This includes the development of R objects specific to her analysis that encapsulate the key steps and data structures handed off between steps. She also includes helper functions that store and retrieve the pieces of each object in Synapse as a set of folders, files, and annotations following a particular convention. She then develops a widget for the Synapse UI that presents a visualization of this data in a way her collaborators can understand. This gives her and other analysts an object-centric view in R of the data structures relevant to this analysis, and the ability to easily load and save these objects to/from Synapse. Other analysts can do the same thing in other environments (e.g. Python) by defining similar objects and helper functions.
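To make the object-centric pattern concrete, here is a minimal Python sketch of what such helper functions might look like. All names here are hypothetical, and an in-memory dict stands in for Synapse; a real implementation would call the Synapse client to create folders, upload files, and set annotations.

```python
from dataclasses import dataclass, field

# In-memory stand-in for Synapse storage (hypothetical; a real version
# would use the Synapse client to create folders and upload files).
FAKE_SYNAPSE = {}

@dataclass
class AnalysisResult:
    """Hypothetical object encapsulating one analysis hand-off."""
    name: str
    inputs: dict = field(default_factory=dict)    # raw data tables
    outputs: dict = field(default_factory=dict)   # derived results
    params: dict = field(default_factory=dict)    # analysis parameters

def save_analysis(obj: AnalysisResult, project: str) -> str:
    """Store the object under a folder-per-object convention:
    <project>/<name>/inputs/* and <project>/<name>/outputs/*, with
    parameters recorded as annotations on the folder."""
    folder = f"{project}/{obj.name}"
    FAKE_SYNAPSE[folder] = {
        "files": {f"inputs/{k}": v for k, v in obj.inputs.items()}
                 | {f"outputs/{k}": v for k, v in obj.outputs.items()},
        "annotations": dict(obj.params),
    }
    return folder

def load_analysis(folder: str) -> AnalysisResult:
    """Reassemble the object from the stored folder layout."""
    entry = FAKE_SYNAPSE[folder]
    name = folder.rsplit("/", 1)[1]
    inputs = {k.split("/", 1)[1]: v for k, v in entry["files"].items()
              if k.startswith("inputs/")}
    outputs = {k.split("/", 1)[1]: v for k, v in entry["files"].items()
               if k.startswith("outputs/")}
    return AnalysisResult(name, inputs, outputs, dict(entry["annotations"]))
```

A round trip (save then load) reconstructs an equal object, which is exactly the property that lets analysts in different environments share the same stored layout.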
Synapse now needs to help Alice run large numbers of analytical pipelines for various collaborators who want her to do the analysis for them. She has them contribute data to their own projects, following particular conventions for the raw data. She then runs her pipeline and publishes the results back to their projects, even auto-generating a first draft of the project wiki, and uses Synapse to communicate her results back to these collaborators.
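The publish-back step, including the auto-generated wiki draft, might look like the following sketch. The function names are hypothetical and the "pipeline" is a trivial stand-in; a real version would upload result files and create the wiki page through the Synapse client.

```python
def run_pipeline(raw_data: dict) -> dict:
    """Stand-in for Alice's analysis: one summary per input file."""
    return {name: {"n_rows": len(rows)} for name, rows in raw_data.items()}

def draft_wiki(project: str, results: dict) -> str:
    """Auto-generate a first-draft wiki page summarizing the run."""
    lines = [f"# Analysis results for {project}", ""]
    for name, summary in sorted(results.items()):
        lines.append(f"* **{name}**: {summary['n_rows']} records processed")
    lines.append("")
    lines.append("_Draft generated automatically; edit before sharing._")
    return "\n".join(lines)

def publish(project: str, raw_data: dict) -> str:
    """Run the pipeline on a collaborator's contributed data and
    produce the wiki draft (here simply returned as markdown text)."""
    results = run_pipeline(raw_data)
    return draft_wiki(project, results)
```

Because the collaborators' projects follow the same raw-data conventions, the same `publish` step can be repeated unchanged across all of them.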
If we accumulate many of these sorts of objects, an extension to this use case is for Synapse to provide central storage and retrieval of the object definitions themselves, and/or ways to autogenerate the objects and their helper functions from existing Synapse data structures used as prototype instances.
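Autogeneration from a prototype could amount to inferring a field schema from one example instance and emitting object code from it. A hypothetical Python sketch, with all names invented for illustration:

```python
def infer_schema(prototype: dict) -> dict:
    """Given a prototype instance (field name -> example value), derive
    a simple schema (field name -> type name) from which object and
    helper definitions could be generated."""
    return {name: type(value).__name__ for name, value in prototype.items()}

def generate_class_source(class_name: str, schema: dict) -> str:
    """Emit Python source for a dataclass matching the inferred schema."""
    lines = ["from dataclasses import dataclass", "", "@dataclass",
             f"class {class_name}:"]
    lines += [f"    {name}: {tname}" for name, tname in schema.items()]
    return "\n".join(lines)
```

A centrally stored schema like this is what would let Synapse hand equivalent object definitions to both R and Python analysts from the same prototype.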