

Dataset Hosting Design

Assumptions

Where

Assume that the initial cloud we target is AWS but we plan to support additional clouds in the future.

For the near term we are using AWS as our external hosting partner. They have agreed to support our efforts for CTCAP. Over time we anticipate adding additional external hosting partners such as Google and Microsoft. We imagine that different scientists and/or institutions will want to take advantage of different clouds.

We can also imagine that the platform should hold locations of files in internal hosting systems, even though not all users of the platform would have access to files in those locations.

Metadata references to hosted data files should be modelled as a collection of Locations, where a Location could be of many types:

  • an S3 URL
  • a Google Storage URL
  • an Azure Blobstore URL
  • an EBS snapshot id
  • a filepath on a Sage internal server
  • ....
class Location {
    String provider; // AWS, Google, Azure, Sage cluster – people will want to set a preferred cloud to work in
    String type;     // filepath, download URL, S3 URL, EBS snapshot name
    String location; // the actual URI or path
}
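
For example, a single dataset might carry several Locations. A minimal sketch (hypothetical providers, ids, and paths; same Java style as the class above):

import java.util.Arrays;
import java.util.List;

class LocationExample {
    public static void main(String[] args) {
        // Hypothetical entries for one dataset; the S3 URL would be usable by external
        // users, while the Sage cluster filepath is internal-only.
        List<Location> locations = Arrays.asList(
                newLocation("AWS", "S3 url", "s3://sage-datasets/dataset-123/expression-matrix.tar.gz"),
                newLocation("AWS", "EBS snapshot name", "snap-0123456789abcdef0"),
                newLocation("Sage cluster", "filepath", "/work/platform/dataset-123/expression-matrix.tar.gz"));
        locations.forEach(l -> System.out.println(l.provider + " / " + l.type + " / " + l.location));
    }

    private static Location newLocation(String provider, String type, String location) {
        Location l = new Location();
        l.provider = provider;
        l.type = type;
        l.location = location;
        return l;
    }
}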

What

For now we are assuming that we are dealing with files. Later on we can also envision providing access to data stored in a database and/or a data warehouse.

Who

Assume tens of thousands of users will eventually use the platform.

How

Assume that users who only want to download do not need to have an AWS account.

Assume that anyone wanting to interact with data on EC2 or Elastic MapReduce will have an AWS account and will be willing to give us their account number (which is a safe piece of info to give out; it is not an AWS credential).

Design Considerations

  • metadata
    • how to ensure we have metadata for everything we host in the cloud
  • file formats
    • tar archives or individual files on S3?
    • EBS block devices per dataset?
  • file layout
    • how to organize what we have
    • how can we enforce a clean layout for files and EBS volumes?
    • how to keep track of what we have
  • access patterns
    • we want to make the right thing be the easy thing - make it easy to do computation in the cloud
    • file download will be supported but will not be the recommended use case
    • recommendations and examples from the R prompt for interacting with the data when working on EC2
  • security
    • not all data is public
    • encryption or clear text?
      • key management
    • one time urls?
    • intrusion detection
    • how to manage ACLs and bucket policies
      • are there scalability upper bounds on ACLs? e.g., can't add more than X AWS accounts to an ACL
  • auditability
    • how to have audit logs
    • how to download them and make use of them
  • human data and regulations
    • what recommendations do we make to people getting some data from Sage and some data from dbGaP and co-mingling that data in the cloud
  • monitoring - what should be monitored
    • access patterns
    • who
    • when
    • what
    • how much
      • data footprint
      • upload bandwidth
      • download bandwidth
      • archive unused data to cheaper storage
  • cost
    • read vs. write
    • cost of allowing writes
    • cost of keeping same data in multiple formats
    • can we take advantage of the free hosting for http://aws.amazon.com/datasets even though we want to keep an audit log?
    • how to meter and bill customers for usage
  • operations
    • how to make it efficient to manage
    • reduce the burden of administrative tasks
    • how to enable multiple administrators
  • how long does it take to get files up/down?
    • upload speeds - we are on the lambda rail
    • shipping hard drives
  • durability
    • data corruption
    • data loss
  • scalability
    • if possible, we want to only be the access grantors and then let the hosting provider take care of enforcing access controls and vending data

High Level Use Cases

Users want to:

  • download a public dataset
  • download a protected dataset for which the platform has granted them access
  • use a public dataset on EC2
  • use a protected dataset for which the platform has granted them access on EC2
  • use a public dataset on Elastic MapReduce
  • use a protected dataset for which the platform has granted them access on Elastic MapReduce

The list above is more exhaustive, but what follows are some firm and some loose requirements:

  • enforce access restrictions on protected datasets
  • log downloads
  • log EC2/EMR usage
  • figure out how to monitor user usage such that we could potentially charge them for usage
  • think about how to minimize costs
  • think about how to ensure that users sign a EULA before getting access to data

Options to Consider

AWS Public Data Sets

Current Scenario:

  • Sage currently has two data sets stored as "AWS Public Datasets" in the US West Region.
  • Users can discover them by browsing public datasets on http://aws.amazon.com/datasets/Biology?browse=1 and also via the platform.
  • Users can use them for cloud computation by spinning up EC2 instances and mounting the data as EBS volumes.
  • Users cannot directly download data from these public datasets, but once they have them mounted on an EC2 host, they can certainly scp the files to their local system.
  • Users are not forced to sign a Sage-specified EULA prior to access because they can bypass the platform and access this data via normal AWS mechanisms.
  • Users must have an AWS account to access this data.
  • There is no mechanism to grant access. All users with AWS accounts are granted access by default.
  • There is no mechanism to keep an audit log for downloads or other usage of this data.
  • Users pay for access by paying their own costs for EC2 and bandwidth charges.
  • Hosting is free to Sage.

Future Scenario:

  • this is currently EBS only but it will also be available for S3 in the future
  • TODO ask Deepak what other plans they have in mind for the re-launch of AWS Public Datasets.
  • TODO tell Deepak our suggested features for AWS Public Datasets.

Tech Details:

  • You create a new "Public Dataset" by
    • making an EBS snapshot in each region in which you would like it to be available (see the sketch after this list)
    • providing the snapshot id(s) and metadata to Deepak (TODO see if this is still the case)
    • then you wait for Amazon to get around to it
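
A minimal sketch of the snapshot step, assuming the AWS SDK for Java and hypothetical volume and dataset names (handing the snapshot id over to Amazon remains a manual step):

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateSnapshotRequest;

public class PublicDatasetSnapshotExample {
    public static void main(String[] args) {
        // One snapshot per region in which the dataset should be available.
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard().withRegion("us-west-1").build();

        String snapshotId = ec2.createSnapshot(
                new CreateSnapshotRequest("vol-0123456789abcdef0", "Sage dataset 123"))
                .getSnapshot().getSnapshotId();

        // The snapshot id plus the dataset metadata is what gets submitted to AWS.
        System.out.println("Snapshot to submit: " + snapshotId);
    }
}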

Pros:

  • free hosting!
  • scalable

Cons:

  • this won't work for public data if it is a requirement that
    • all users provide an email address and agree to a EULA prior to access
    • we must log downloads
  • this won't work for protected data unless the future implementation provides more support

S3

Skipping a description of public data on S3 because the scenario is very straightforward - if you have the URL, you can download the resource. For example: http://s3.amazonaws.com/nicole.deflaux/ElasticMapReduceFun/mapper.R

Protected Data Scenario:

  • All Sage data is stored on S3 and is not public.
  • Users can only discover what data is available via the platform.
  • Users can use the data for cloud computation by spinning up EC2 instances and downloading the files from S3 to the hard drive of their EC2 instance.
  • Users can download the data from S3 to their local system. See below for more details on this.
  • The platform directs users to sign a Sage-specified EULA prior to gaining access to these files in S3.
  • Users must have a Sage platform account to access this data for download.  They may need an AWS account for the cloud computation use case depending upon the mechanism we use to grant access.
  • The platform grants access to this data. See below for details about the various ways we might do this.
  • The platform will write an audit log entry each time it grants access, recording to whom access was granted. S3 can also be configured to log all access to resources, which could serve as a means of intrusion detection (see the sketch after this list).
    • These two types of logs record different events (granting access vs. using access), so their entries will not map 1-to-1, but they should overlap substantially.
    • The platform can store anything it likes in its audit log.
    • The S3 log stores normal web-access-log data with the following identifiable fields:
      • the client IP address is available in the log
      • "anonymous" or the user's AWS canonical user id will appear in the log
    • We can try appending an extra query parameter to the S3 URL to help line S3 log entries up with audit log entries.
  • See proposals below regarding how users might pay for usage.
  • The cost of hosting is not free.
    • Storage fees will apply.
    • Bandwidth fees apply when data is uploaded.
    • Data can also be shipped via hard drives and AWS Import fees would apply.
    • Bandwidth fees apply when data is downloaded out of AWS. There is no charge when it is downloaded inside AWS (e.g., to an EC2 instance).
    • These same fees apply to any S3 log data we keep.
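
A minimal sketch of turning on S3 server access logging for the dataset bucket (assuming the AWS SDK for Java; bucket names are hypothetical):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLoggingConfiguration;
import com.amazonaws.services.s3.model.SetBucketLoggingConfigurationRequest;

public class AccessLoggingExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Deliver access logs for the dataset bucket into a separate log bucket,
        // prefixed so the platform can fetch and parse them later. The log bucket
        // must grant write access to the S3 log delivery group.
        BucketLoggingConfiguration logging =
                new BucketLoggingConfiguration("sage-s3-access-logs", "protected-datasets/");
        s3.setBucketLoggingConfiguration(
                new SetBucketLoggingConfigurationRequest("sage-protected-datasets", logging));
    }
}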

Open Questions:

  • can we use the canonical user id to know who the user is if they have previously given us their AWS account id?
  • if we stick our own query params on the S3 URL will they show up in the S3 log?

Resources:

  • Best Effort Server Log Delivery:
    • The server access logging feature is designed for best effort. You can expect that most requests against a bucket that is properly configured for logging will result in a delivered log record, and that most log records will be delivered within a few hours of the time that they were recorded.
    • However, the server logging feature is offered on a best-effort basis. The completeness and timeliness of server logging is not guaranteed. The log record for a particular request might be delivered long after the request was actually processed, or it might not be delivered at all. The purpose of server logs is to give the bucket owner an idea of the nature of traffic against his or her bucket. It is not meant to be a complete accounting of all requests.
  • Usage Report Consistency
    • It follows from the best-effort nature of the server logging feature that the usage reports available at the AWS portal might include usage that does not correspond to any request in a delivered server log.
  • Log format details

Options to Restrict Access to S3

S3 Pre-Signed URLs for Private Content

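A minimal sketch of generating a short-lived pre-signed URL (assuming the AWS SDK for Java; bucket and key names are hypothetical):

import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

import java.net.URL;
import java.util.Date;

public class PresignedUrlExample {
    public static void main(String[] args) {
        // Hypothetical bucket and key; real values would come from the platform's Location metadata.
        String bucket = "sage-protected-datasets";
        String key = "dataset-123/expression-matrix.tar.gz";

        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // The URL expires one minute from now, matching the short expiry window mentioned below.
        Date expiration = new Date(System.currentTimeMillis() + 60 * 1000);

        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest(bucket, key, HttpMethod.GET)
                        .withExpiration(expiration);

        URL url = s3.generatePresignedUrl(request);
        System.out.println("Time-limited download URL: " + url);
    }
}
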
Pros:

  • Simple! These can be used for both the download use case and the cloud computation use cases.
  • Scalable
  • We control the duration of the expiry window; for example, all URLs might be good for one minute.

Cons:

  • Until the URL expires, anyone who obtains it can use that same URL to download the file.
  • If a user gets her download URL and does not use it right away, she will need to reload the WebUI page or re-run the R prompt command to get a fresh one.

Open Questions:

  • How does this work with the new support for partial downloads?
  • Does this work with torrent-style access?
  • Can we limit S3 access to HTTPS only?

Resources:

S3 Bucket Policies

This is the newer mechanism from AWS for access control.

Open Questions:

  • What is the upper limit on the number of grants?
  • What is the upper limit on the number of principals that can be listed in a single grant?

Resources:

  • The following restrictions on Amazon S3 policies are from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?AccessPolicyLanguage_SpecialInfo.html (a sketch of setting a bucket policy follows this list):
    • The maximum size of a policy is 20 KB
    • The value for Resource must be prefixed with the bucket name or the bucket name and a path under it (bucket/). If only the bucket name is specified, without the trailing /, the policy applies to the bucket.
    • Each policy must have a unique policy ID (Id)
    • Each statement in a policy must have a unique statement ID (sid)
    • Each policy must cover only a single bucket and resources within that bucket (when writing a policy, don't include statements that refer to other buckets or resources in other buckets)
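
A minimal sketch of applying such a policy (assuming the AWS SDK for Java; the bucket name, account id, and resource path are hypothetical, and a real policy would be generated from the platform's access-control records):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketPolicyExample {
    public static void main(String[] args) {
        String bucket = "sage-protected-datasets";
        String policy =
            "{\n" +
            "  \"Id\": \"SageProtectedDataPolicy\",\n" +
            "  \"Statement\": [{\n" +
            "    \"Sid\": \"GrantReadToApprovedUser\",\n" +
            "    \"Effect\": \"Allow\",\n" +
            "    \"Principal\": {\"AWS\": \"arn:aws:iam::111122223333:root\"},\n" +
            "    \"Action\": \"s3:GetObject\",\n" +
            "    \"Resource\": \"arn:aws:s3:::sage-protected-datasets/dataset-123/*\"\n" +
            "  }]\n" +
            "}";

        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        // The whole policy (max 20 KB) replaces any existing policy on the bucket.
        s3.setBucketPolicy(bucket, policy);
    }
}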

S3 ACL

This is the older mechanism from AWS for access control.

This is ruled out for protected data because ACLs can have a maximum of 100 grants, and it appears that these grants cannot be made to groups such as groups of arbitrary AWS users.

Open Question:

  • Confirm that grants do not apply to groups of AWS users.

Resources:

S3 and IAM

With IAM, a group of users can be granted access to S3 resources. This will be helpful for managing access for Sage system administrators and Sage employees.

This is ruled out for protected data because IAM is used for managing groups of users all under a particular AWS bill (e.g., all employees of a company).
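
For the Sage-internal administration case, a minimal sketch of creating an administrators group with access to the dataset bucket (assuming the AWS SDK for Java; group, policy, bucket, and user names are hypothetical):

import com.amazonaws.services.identitymanagement.AmazonIdentityManagement;
import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;
import com.amazonaws.services.identitymanagement.model.AddUserToGroupRequest;
import com.amazonaws.services.identitymanagement.model.CreateGroupRequest;
import com.amazonaws.services.identitymanagement.model.PutGroupPolicyRequest;

public class IamAdminGroupExample {
    public static void main(String[] args) {
        AmazonIdentityManagement iam = AmazonIdentityManagementClientBuilder.defaultClient();

        // A group for Sage system administrators under the Sage AWS account.
        iam.createGroup(new CreateGroupRequest("SageDatasetAdmins"));

        // Inline policy granting the group full access to the dataset bucket.
        String policy =
            "{\"Statement\":[{\"Effect\":\"Allow\",\"Action\":\"s3:*\"," +
            "\"Resource\":\"arn:aws:s3:::sage-protected-datasets/*\"}]}";
        iam.putGroupPolicy(new PutGroupPolicyRequest()
                .withGroupName("SageDatasetAdmins")
                .withPolicyName("DatasetBucketAccess")
                .withPolicyDocument(policy));

        iam.addUserToGroup(new AddUserToGroupRequest("SageDatasetAdmins", "alice"));
    }
}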

Open Questions:

  • Is there a cap on the number of users for IAM?
  • Confirm that IAM is only intended for managing groups and users where the base assumption is that all activity rolls up to a single AWS bill.

Resources:

CloudFront Private Content

CloudFront supports the notion of private content. CloudFront URLs can be created with access policies such as an expiry time and an IP mask.

Pros:

  • This would work for the download-of-protected-content use case.

Cons:

  • This is likely a bad solution for the EC2/EMR use cases because CloudFront sits outside of AWS and users will incur the inbound bandwidth charges when they pull the data down to EC2/EMR.
  • There is an additional charge on top of the S3 hosting costs.

Open Questions:

  • Since we do not need a CDN for the normal reason (to move often requested content closer to the user to reduce download times), does this buy us much over S3 Pre-Signed URLs? It seems like the only added benefit is an IP mask in addition to the expires time.

Resources:

Custom Proxy to S3

All files are kept completely private on S3, and we write a custom proxy that allows permitted users to download files, whether to locations outside AWS or to EC2 hosts.

Pros:

  • full flexibility

Cons:

  • we are now the scalability bottleneck

If all else fails we can do this, but it will be more work operationally to manage a fleet of custom proxy servers.
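
A minimal sketch of what such a proxy might look like (a servlet, assuming the AWS SDK for Java; the bucket name and the authorization/audit hooks are hypothetical placeholders for the platform's own logic):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;

public class DatasetProxyServlet extends HttpServlet {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String datasetKey = req.getParameter("key"); // e.g. a key within the dataset bucket
        String user = (String) req.getSession().getAttribute("platformUserId");

        if (user == null || !isAuthorized(user, datasetKey)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }

        // Every successful grant is written to the platform's audit log.
        logDownload(user, datasetKey);

        // Stream the private object through to the caller; the caller never sees S3 credentials.
        S3Object object = s3.getObject("sage-protected-datasets", datasetKey);
        resp.setContentType("application/octet-stream");
        try (InputStream in = object.getObjectContent(); OutputStream out = resp.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
        }
    }

    // Placeholder hooks; the real versions would query the platform's permission and audit stores.
    private boolean isAuthorized(String user, String key) { return false; }
    private void logDownload(String user, String key) { }
}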

Options to have customers bear some of the costs

S3 "Requester Pays" Buckets

Scenario:

  • The platform requires that users give us their AWS account number for download use cases.
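
A minimal sketch of the mechanics (assuming the AWS SDK for Java; bucket and object names are hypothetical). The bucket owner flips the bucket to Requester Pays once; a requester must then flag each request so the SDK adds the x-amz-request-payer header described in the resources below:

import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;

public class RequesterPaysExample {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Bucket owner (Sage) enables Requester Pays on the bucket.
        s3.enableRequesterPays("sage-protected-datasets");

        // Requester acknowledges the charges on each request (third argument = isRequesterPays).
        GetObjectRequest request = new GetObjectRequest(
                "sage-protected-datasets", "dataset-123/expression-matrix.tar.gz", true);
        s3.getObject(request, new File("expression-matrix.tar.gz"));
    }
}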

Open Questions:

Resources:

  • In general, bucket owners pay for all Amazon S3 storage and data transfer costs associated with their bucket. A bucket owner, however, can configure a bucket to be a Requester Pays bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data.
  • Typically, you configure buckets to be Requester Pays when you want to share data but not incur charges associated with others accessing the data. You might, for example, use Requester Pays buckets when making available large data sets, such as zip code directories, reference data, geospatial information, or web crawling data.
  • Important: If you enable Requester Pays on a bucket, anonymous access to that bucket is not allowed.
  • You must authenticate all requests involving Requester Pays buckets. The request authentication enables Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket.
  • After you configure a bucket to be a Requester Pays bucket, requesters must include x-amz-request-payer in their requests either in the header, for POST and GET requests, or as a parameter in a REST request to show that they understand that they will be charged for the request and the data download.
  • Requester Pays buckets do not support the following.
    • Anonymous requests
    • BitTorrent
    • SOAP requests
    • You cannot use a Requester Pays bucket as the target bucket for end user logging, or vice versa. However, you can turn on end user logging on a Requester Pays bucket where the target bucket is a non Requester Pays bucket.
  • http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?RequesterPaysBuckets.html

Dev Pay

The Requester Pays feature (used alone) lets you give other Amazon S3 users access to your data, but you can't make a profit; you can only avoid paying data transfer and request costs.

DevPay (used alone) lets you give access to your data to anyone who is signed up for your product (regardless if they're Amazon S3 users). But because the DevPay bucket isn't a Requester Pays bucket, you (as the owner of the bucket) still pay for data transfer and requests (at the DevPay product's price).

You need to use DevPay together with a Requester Pays bucket if you want to charge people a premium to download your data (the overall process is described in Selling Your Data). Note that because you're using DevPay, your customers don't have to be Amazon S3 users.

When you use DevPay with your Requester Pays bucket, your customers download your data to a location outside Amazon S3. They don't copy the data from your bucket to theirs.

This is for the cloud compute use case?

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingDevPay.html

Dev Pay + S3 Requester Pays

This is for the download use case.

Resources

Flexible Payments Service, PayPal, etc.

We can use general services for billing customers. It would be up to us to determine what is a transaction, how much to charge, etc.

We may need to keep our own transaction ledger and issue bills. We would definitely let another company handle credit card operations.

EBS

Data is available as hard drive (EBS) snapshots.

Pros:

  • It's a convenient way to access data from EC2 instances.

Cons:

  • This only covers our cloud compute use case, not our download use case.

Recommendation: focus on S3 for now since it can meet all our use cases. Work on this later if customers ask for it due to its convenience in a cloud computing environment.
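
A minimal sketch of the cloud-compute pattern (assuming the AWS SDK for Java; the snapshot id, instance id, availability zone, and device name are hypothetical and would come from the dataset's Location metadata and the user's environment):

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.AttachVolumeRequest;
import com.amazonaws.services.ec2.model.CreateVolumeRequest;

public class MountSnapshotExample {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Create a volume from the dataset snapshot in the same availability zone as the instance.
        String volumeId = ec2.createVolume(new CreateVolumeRequest()
                .withSnapshotId("snap-0123456789abcdef0")
                .withAvailabilityZone("us-west-1a"))
                .getVolume().getVolumeId();

        // Attach it to the user's EC2 instance; the instance can then mount /dev/sdf and read the data.
        ec2.attachVolume(new AttachVolumeRequest(volumeId, "i-0123456789abcdef0", "/dev/sdf"));
    }
}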

EBS snapshot ACL

Open questions:

  • What is the maximum number of grants for this?

Resources

File Organization and Format

TODO Brian will add stuff here

Resources:

Additional Details

Network Bandwidth

Hypothetically, the Hutch (and therefore Sage) is on a high-throughput link to AWS.

The Pacific Northwest Gigapop is the point of presence for the Internet2/Abilene network in the Pacific Northwest. The PNWGP is connected to the Abilene backbone via a 10 GbE link. In turn, the Abilene Seattle node is connected via OC-192 links to both Sunnyvale, California and Denver, Colorado.
PNWGP offers two types of Internet2/Abilene interconnects: Internet2/Abilene transit services and Internet2/Abilene peering at Pacific Wave International Peering Exchange. See Participant Services for more information.

The Corporation for Education Network Initiatives in California (CENIC) and Pacific NorthWest GigaPoP (PNWGP) announced two 10 Gigabit per second (Gbps) connections to Amazon Simple Storage Service (Amazon S3) and Amazon Elastic Compute Cloud (Amazon EC2) for the use of CENIC's members in California, as well as PNWGP's multistate K-20 research and education community.

  • http://findarticles.com/p/news-articles/wireless-news/mi_hb5558/is_20100720/cenic-pacific-northwest-partner-develop/ai_n54489237/
  • http://www.internet2.edu/maps/network/connectors_participants
  • http://www.pnw-gigapop.net/partners/index.html
