
About


The NF Data Portal is a public data repository that stores and shares data generated by multiple collaborative research programs focused on neurofibromatosis (NF) diseases (neurofibromatosis type 1, type 2, and schwannomatosis).

Because NF diseases are relatively rare, samples and data generated in this area are precious. This is why open science principles are so valuable—data sharing and collaboration maximizes the value and impact of research. The NF Data Portal exists to support this joint effort. Read more about open science and data sharing here.

The open access data in the NF Data Portal is generated by studies backed by funding organizations, including the Children’s Tumor Foundation (CTF), the Neurofibromatosis Therapeutic Acceleration Program (NTAP), and the Gilbert Family Foundation (GFF). Some studies are funded in conjunction with one another—such a group of funded projects is called an initiative on the portal. Funded studies produce data files, metadata (also called annotations), publications, and biological tools like cell lines, mouse models, and plasmids. Study products, such as multiple data files, may be bundled into a dataset, or explored and analyzed with computational tools developed by the community.

Learn about the organizational structure and contents of the portal here.

History

The NF Data Portal was built by members of the NF Open Science Initiative (NF-OSI), an alliance formed to support open science within the neurofibromatosis and schwannomatosis research community. NF-OSI is the result of a collaboration between the Children’s Tumor Foundation (CTF) and the Neurofibromatosis Therapeutic Acceleration Program (NTAP) that dates back to 2014 (you can read more about these programs here). This is an open effort focused on finding NF treatments by sharing data and analysis results with the broader community—an effort aided by the existence of the NF Data Portal.

Our Data Lifecycle

The studies featured on the NF Data Portal embrace open science principles and operate under a regulated data lifecycle:

As a user of the portal, you can engage with as many of these stages as fits your needs. Each stage of the data lifecycle manifests on the portal in different ways.

  1. During the data generation phase, data are uploaded to the data storage platform, Synapse. During this phase, the data are typically not available for download on the portal, but some information is exposed, such as study title, study description, and metadata.

  2. Data curation mostly occurs behind the scenes of the portal on our data storage platform, Synapse, but this stage enables data discovery, powering search and exploration on the portal.

  3. Data analysis surfaces on the portal through biological and computational tools and is supported by information available on the portal, such as metadata and provenance.

  4. Data interpretation is enabled through various Synapse features, including wikis and discussion forums, but can also be explored on the portal via published data, associated publications, and tools.

  5. Data dissemination includes the NF Data Portal, journal publications, and other means of data distribution.

For an in-depth review of the NF Data Portal’s community engagement and structure, please see our article in Scientific Data.

NF Data Portal ↔︎ Synapse

At this point, you know what the NF Data Portal is, and have likely come across the term Synapse - but how do they fit together? Let’s break this down:

Sage Bionetworks

First, there’s Sage Bionetworks - a name you may or may not have come across. While Sage is not a tool you’ll be using, you should know what it is: the organization behind all of this. Sage is a non-profit based in Seattle, Washington, dedicated to promoting and advancing open science and to engaging patients in the research process. Sage acts as the Data Coordinating Center (DCC) for several different portals, including the NF Data Portal. The scientists, developers, and designers who built the tools you’re using are all employed by Sage. You can learn more about Sage Bionetworks and its initiatives here.

Synapse

In line with its advocacy for open science, Sage developed a software platform called Synapse. This platform is what allows for collaborative data curation and analysis, computational modeling, and more. It allows users to upload, store, analyze, and track data in a private space before releasing it to the public-facing NF Data Portal. Think of Synapse as the back-end where all the data lives.

NF Data Portal

If Synapse is the back-end for data, the NF Data Portal is the front-end. It’s essentially the user interface, or entry point, for you to view data and other shared content. Data gets uploaded into Synapse, where it is then processed into readable form for you to access in the portal.

NF Data Standards

Data standards underpin data sharing and make it possible to access data. Data standards involve:

  • metadata (information about data)

  • schemas (collections of data attributes/keys, descriptions, and valid values—in tabular data, attributes are usually represented as column headers)

  • ontology (the terminology, or values, used in the data)

  • any other imposed rules that enable data sharing

Where possible, Sage Bionetworks models its data standards on established global standards to promote interoperability across platforms, in support of FAIR data sharing. When these components work together, data standards allow users to find data, and ensure all information is present for successful reuse and analysis.
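To make the schema idea concrete, here is a minimal sketch of how one attribute, its description, and its valid values might fit together, and how a controlled vocabulary constrains metadata entries. The attribute structure and valid values below are illustrative, not the portal’s actual schema, and Python is used purely for demonstration:

```python
# A minimal sketch of a single schema attribute plus a validity check.
# The description and valid values are invented for illustration.
assay_attribute = {
    "attribute": "assay",
    "description": "The technology used to generate the data.",
    "validValues": ["rnaSeq", "wholeExomeSeq", "immunohistochemistry"],
}

def is_valid(attribute_schema: dict, value: str) -> bool:
    """Return True if the value is in the attribute's controlled vocabulary."""
    return value in attribute_schema["validValues"]

print(is_valid(assay_attribute, "rnaSeq"))   # True
print(is_valid(assay_attribute, "RNA-Seq"))  # False: not a controlled term
```

Validating against a fixed list of terms is what makes metadata consistently searchable: every contributor records the same concept with the same value.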

The majority of data available in the NF Data Portal is sequencing data, such as RNA sequencing and whole exome sequencing, though we also have a variety of imaging assays and other data. We derive most of our data standards and collections of standardized keys and values from vetted sources such as the National Cancer Institute’s Genomic Data Commons (NCI’s GDC) and the NCI Thesaurus. If you already use or consult those standards, many of NF’s standards will be familiar to you.

Metadata Standards

For the most part, we collect scientific metadata that documents information about the experimental assay—for example, with sequencing data, information such as:

  • type of assay (assay)

  • platform used (platform)

  • library preparation type (libraryPrep)

  • read information (readLength, readPairOrientation, readStrandOrigin)

However, we also collect information related to the data project, such as:

  • who funded the project (fundingAgency)

  • what initiative/consortium it’s associated with (initiative)

  • the study’s title and ID (studyName, studyID)

  • general information about the data (filename, fileFormat, resourceType, dataType, dataSubtype)

Metadata is provided in CSV files, so think about this information in terms of a spreadsheet.

The attributes listed above (such as assay, platform, studyID) are called keys, and would appear as the column headers in a spreadsheet.

The entries recorded under those keys (for example, rnaSeq under assay) are called values, and would appear in the spreadsheet cells.

To enforce data standards, we control the terminology used for values through (meta)data dictionaries and other tools. Using controlled vocabularies and other data standards allows you to find what you’re looking for on the portal, so that you don’t have to search through multiple terms for the same thing. For example, instead of ribonucleic acid sequencing or RNA-Seq, we use the value rnaSeq.
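Putting keys, values, and controlled vocabularies together, a metadata manifest can be pictured as a small CSV: keys across the top, controlled values in the cells. The sketch below is hypothetical (the file names, platform value, and study name are invented for illustration), read with Python’s standard csv module:

```python
import csv
import io

# Hypothetical metadata manifest: keys are the column headers,
# controlled-vocabulary values fill the cells of each row.
manifest = io.StringIO(
    "filename,fileFormat,assay,platform,studyName\n"
    "tumor_01.fastq,fastq,rnaSeq,exampleSeqPlatform,Example NF Study\n"
)

rows = list(csv.DictReader(manifest))
print(rows[0]["assay"])  # prints "rnaSeq", the controlled value for RNA-Seq
```

Because the assay column always uses the value rnaSeq rather than free-text variants, a search for that single term finds every matching file.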
