
...

Open an entry in the Platform AWS Log. It is helpful to think through exactly what you are going to do and write it down. Then, as you execute the change, if you deviate from the steps you wrote in the log, update the log. In the end, when you haven't made any mistakes and everything has gone smoothly, you will think this was a waste of time. It wasn't. The closer you are to a big demo, the more true this will be.

Crowd (skip if using an existing Crowd deployment)

In most cases you should be reusing existing Crowd instances. We currently have two Crowd servers running:

prod: https://dev-crowd.sagebase.org:8443/crowd/console

staging + test (shared): https://prod-crowd.sagebase.org:8443/crowd/console

If setting up a new Crowd server, or for help troubleshooting, see: Setting Up Production Crowd

If you just need to point a stack at a particular Crowd instance, set org.sagebionetworks.crowd.endpoint in the stack.properties file (use the URLs above minus the /crowd/console part).
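For example, a stack pointed at the shared staging/test Crowd instance would carry an entry like the following in stack.properties (illustrative only; use whichever Crowd URL applies to your stack):

    # Illustrative stack.properties entry -- the endpoint is the console URL above minus /crowd/console
    org.sagebionetworks.crowd.endpoint=https://prod-crowd.sagebase.org:8443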

...

Click on 'Load Balancer' tab

For 'HTTP Listener port' choose 'OFF' for the repo and auth services, and '80' for the portal.

For 'HTTPS Listener port' choose '443'. 

For 'SSL Cert' choose arn:aws:iam::325565585839:server-certificate/SynapseCert
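If you would rather script these settings than click through the console, they correspond roughly to Elastic Beanstalk option settings along these lines (a sketch only: the environment name is a placeholder, and the aws:elb:loadbalancer namespace assumes the classic load balancer that Beanstalk provisions):

    # Rough CLI equivalent of the console steps above, shown for the portal environment
    aws elasticbeanstalk update-environment --environment-name <portal_env> \
      --option-settings \
      Namespace=aws:elb:loadbalancer,OptionName=LoadBalancerHTTPPort,Value=80 \
      Namespace=aws:elb:loadbalancer,OptionName=LoadBalancerHTTPSPort,Value=443 \
      Namespace=aws:elb:loadbalancer,OptionName=SSLCertificateId,Value=arn:aws:iam::325565585839:server-certificate/SynapseCert
    # For the repo and auth environments, set LoadBalancerHTTPPort to OFF instead of 80.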

Configure Notifications

Click on 'Notifications' tab

Set Email Address to 'platform@sagebase.org'
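If scripting, the same notification email maps to an option setting along these lines (sketch only; the environment name is a placeholder, and option names containing spaces must be quoted):

    # Rough CLI equivalent of the Notifications step above
    aws elasticbeanstalk update-environment --environment-name <env_name> \
      --option-settings "Namespace=aws:elasticbeanstalk:sns:topics,OptionName=Notification Endpoint,Value=platform@sagebase.org"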

Configure Container

Click on 'Container'.

In the 'JVM Command Line Options' field, for a production deployment:

...

You can find the code for this script here: clinicalVariableDescriptionsLoader.py

How to run the Data Loader (Deprecated)

We should now be migrating and maintaining data between version upgrades instead.

Once the environments are running, you can populate the system with a set of starting data. On one of the local servers, go to /work/platform/DatasetMetadataLoader and execute the following:

    # Make sure you have the latest version
    svn up

  1. Replace <repo_instance> and <auth_instance> with the repository and authentication instances.
  2. Either make sure that <platform_admin_email> is a Synapse administrator in Crowd, or replace it with a Synapse administrator account.
  3. Execute the loader (see the example invocation after this list):
    python datasetCsvLoader.py -e http://<repo_instance>/repo/v1
    -a http://<auth_instance>/auth/v1 -u <platform_admin_email> -p <platform_admin_pw>
    This will create a publicly-accessible project called Sage BioCuration and populate it with curated data from Sage's repository data team.
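For example, with hypothetical hostnames (substitute the actual repo and auth instances and a real administrator account), the invocation looks like this:

    # Illustrative only -- the hostnames below are placeholders, not real instances
    python datasetCsvLoader.py -e http://repo-staging.example.sagebase.org/repo/v1 -a http://auth-staging.example.sagebase.org/auth/v1 -u <platform_admin_email> -p <platform_admin_pw>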

If you need to repopulate the data in S3, pass the -3 argument to the data loader. It uploads the data serially right now, so it takes an hour or two. We should really only need to do this if we've messed up our S3 bucket.
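For example, appending the flag to the normal invocation (same placeholders as above):

    # Repopulate S3 as well -- serial upload, expect an hour or two
    python datasetCsvLoader.py -e http://<repo_instance>/repo/v1 -a http://<auth_instance>/auth/v1 -u <platform_admin_email> -p <platform_admin_pw> -3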