
Synapse Stack Deployment

Synapse Deployment Instructions

Note that all Synapse resources should be created in US East, as Elastic Beanstalk is not yet available in all regions.

Tell the platform team what you are going to do

Really. Make sure everyone has checked in what you think should be checked in for the release. Make sure Synapse customers have a heads-up as well. Give yourself some buffer between when check-ins are done and when you need to have the system cut over.

Log what you do to the AWS Platform account in the Platform AWS Log

Open an entry in the Platform AWS Log. It is helpful to think through exactly what you are going to do and write it down. Then, as you execute the change, if you deviate from the steps you wrote in the log, update the log. In the end, when you haven't made any mistakes and everything has gone smoothly, you will think this was a waste of time. It wasn't. The closer you are to a big demo, the more true this will be.

Crowd (skip if using existing crowd deployment)

In most cases you should be re-using existing Crowd instances. We currently have two Crowd servers running:

prod: https://prod-crowd.sagebase.org:8443/crowd/console

staging + test (shared): https://dev-crowd.sagebase.org:8443/crowd/console

If setting up a new Crowd server or for help troubleshooting see: Setting Up Production Crowd

If you just need to point a stack at a particular Crowd instance, set the org.sagebionetworks.crowd.endpoint property in the stack.properties file (use the URLs above without the /crowd/console part).
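For example (illustrative; substitute the endpoint for whichever Crowd instance the stack should use):

org.sagebionetworks.crowd.endpoint=https://dev-crowd.sagebase.org:8443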

Get the build artifacts from Artifactory

You should not be deploying anything you built yourself on your local machine. Only deploy build artifacts generated by Bamboo and stored in Artifactory (http://sagebionetworks.artifactoryonline.com) as the result of a clean build. See Branching and Tagging (http://sagebionetworks.jira.com/wiki/display/PLFM/Branching+and+Tagging) for information about managing the build, branch, and tag process. For a full upgrade you will need 3 .war files out of Artifactory for a deployment: services-repository-<version>.war, services-authentication-<version>.war, and portal-<version>.war. Each must go into its own Beanstalk environment.

The specific steps are:

  1. Log in to: http://sagebionetworks.artifactoryonline.com/
  2. Go to the Artifacts tab
    1. For a snapshot build go to: libs-snapshots-local > org > sagebionetworks > [project] > [version]-SNAPSHOT > [project]-[version]-SNAPSHOT.war
    2. For a released version go to: libs-releases-local > org > sagebionetworks > [project] > [version] > [project]-[version].war
  3. Click download
  4. Now log into the AWS console
  5. Click on the "Elastic Beanstalk" tab
  6. Select the 'stack' (Synapse or Synapse-Staging). Note that you will have to upload the .war file into each stack, or what Beanstalk calls an "Application"
  7. From here, you can either upload the wars as new versions without deploying (if you are going to build new environments), or upload and deploy in one step (if your environments already exist).
  8. To update an environment in place
    1. A number of "Environments" will be listed. Click on "Environment Details" for the environment of interest.
    2. Click on "Deploy a different version."
    3. Click the radio button "Upload and deploy a new version"
    4. To label the version, follow the naming convention given in Branching and Tagging: http://sagebionetworks.jira.com/wiki/display/PLFM/Branching+and+Tagging
    5. Upload the .war file that you downloaded from Artifactory.
    6. Your new .war file will now be deployed to Elastic Beanstalk.
    7. Repeat for any additional wars that need upgrades, then skip ahead to verification.
  9. Alternately, if you are going to build new environments, you can just upload the wars and label the new versions for later use. (This upload step can also be scripted; see the boto sketch below.)
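If you prefer to script the upload, the same thing can be done with boto. The following is a minimal sketch, not the official deployer: it assumes a boto release that includes Elastic Beanstalk support (newer than the boto-2.1.1 bundle referenced later on this page), and the file and version names are illustrative.

import boto
from boto.beanstalk.layer1 import Layer1

# Upload the war (downloaded from Artifactory) to the Beanstalk deployment bucket.
s3 = boto.connect_s3()
bucket = s3.get_bucket('elasticbeanstalk-us-east-1-325565585839')
key = bucket.new_key('services-repository-0.8.war')  # illustrative name
key.set_contents_from_filename('/tmp/services-repository-0.8.war')

# Register the uploaded war as a new application version.
beanstalk = Layer1()
beanstalk.create_application_version(
    application_name='Synapse-Staging',  # or 'Synapse' for production
    version_label='0.8-SNAPSHOT',        # follow the Branching and Tagging convention
    s3_bucket='elasticbeanstalk-us-east-1-325565585839',
    s3_key='services-repository-0.8.war')

# To upgrade an existing environment in place, point it at the new version:
# beanstalk.update_environment(environment_name='Prod-Repo',
#                              version_label='0.8-SNAPSHOT')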

Create or Configure MySQL RDS Service (Skip this section if using existing Environments.)

See Synapse Database Setup and Configuration for details on how to create a new schema for a new stack or instance. The staging and production stacks use Amazon's RDS service. Currently, both stacks use different databases in the same RDS instance. The same RDS instance also holds the ID Generator db, as well as the data for Crowd.

Create Beanstalk Environments (Skip this section if using existing Environments.)

Log in to AWS

http://aws.amazon.com/console/

as platform@sagebase.org (get the password from someone in the Platform department).

Click "Launch New Environment"

set environment name, e.g. "Prod-Auth"

choose or upload an "application version" (which is a WAR file)

Default AMI (32 bit Linux server running Tomcat v 7)

Instance type: t1.micro

Key Pair: PlatformKeyPairEast

email: platform@sagebase.org

Create two more, so that there is one for Auth services, one for Repo services, and one for SynapseWeb

Configure Environments

The configuration of all environments for all Synapse components should be the same, with the exception that port 80 is left open on the web app's load balancer and closed everywhere else.

Configure Server

Click on 'edit configuration' in the Beanstalk UI, start on 'Server' tab:

EC2 Instance Type=t1.micro

EC2 Security Groups=elasticbeanstalk-default

Existing Key Pair=PlatformKeyPairEast

Monitoring Interval=5 minute

Custom AMI ID=ami-524db23b

Configure Load Balancer

Click on 'Load Balancer' tab

For 'HTTP Listener port' choose 'OFF' for the repo and auth services, and '80' for the portal.

For 'HTTPS Listener port' choose '443'.

For 'SSL Cert' choose arn:aws:iam::325565585839:server-certificate/SynapseCert

Configure Notifications

Click on 'Notifications' tab

Set Email Address to 'platform@sagebase.org'

Configure Container

Click on the 'Container' tab.

In the JVM Command Line Options:

For a production deployment, leave the options blank.

For a non-production deployment, add:

-DACCEPT_ALL_CERTS=true

For all deployments, set the following environment properties:

AWS_ACCESS_KEY_ID = <<aws access key id>>

AWS_SECRET_KEY = <<aws secret key>>

PARAM1 = <<url to .properties file in S3>>

PARAM2 = <<encryption key>>

PARAM3 = <<stack name>>

PARAM4 = <<instance name>>

This is the minimum information needed to bootstrap our system with the information needed to load a configuration via a .properties file. Here, the actual .properties file should be uploaded to S3 as described below.
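As a concrete illustration, a staging environment for instance 'A' might be configured as follows (all values below are placeholders, not real credentials):

AWS_ACCESS_KEY_ID = AKIAEXAMPLEEXAMPLE
AWS_SECRET_KEY = exampleSecretKeyDoNotUse
PARAM1 = https://s3.amazonaws.com/elasticbeanstalk-us-east-1-325565585839/stagingA-stack.properties
PARAM2 = exampleEncryptionKey
PARAM3 = staging
PARAM4 = A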

Setting up a Properties file in S3 (Skip this section if using existing Environments.)

For each stack, we have created a unique IAM User, encryption key, and configuration file. These values are passed into the container of the environments as described above. AWS access key ids, secret keys, encryption keys, and the url for an environment can be found on sodo at /work/platform/PasswordsAndCredentials/StackCredentials/IAMUsers in the appropriate .csv file. All stack environments run under this IAM User, and have permission to access their configuration file from S3. Configuration files can be loaded / updated in S3 under the elasticbeanstalk-us-east-1-325565585839 bucket (the same place the .war files are deployed). This gives URLs of the form https://s3.amazonaws.com/elasticbeanstalk-us-east-1-325565585839/<stack-name><instance-name>-stack.properties. If you are creating a new stack, you will have to create the IAM user and grant that user access to the configuration file using the IAM tab of the AWS console. In most cases you should be able to keep the configuration file the same, or replace it with a new file of the same name. Note that the stack and instance names embedded in the .properties file must match the names passed to the environment via PARAM3 and PARAM4; this is a safety feature to reduce the risk of wiring the wrong property file to the wrong environment.

Note that if you are setting up a .properties file, any field that is a password should be encrypted. You can encrypt strings by running StringEncrypter, passing in two args: (1) the string to be encoded, and (2) the aforementioned encryption key.
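A hypothetical invocation (the jar name and package are illustrative; locate the actual StringEncrypter class in the build first):

# prints the encrypted form of the given string
java -cp stack-configuration.jar org.sagebionetworks.StringEncrypter '<string to encrypt>' '<<encryption key>>'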

Migrate data from old stack (Skip this section if using existing Environments.)

See the page on Repository Administration for instructions on how to back up and restore data from Synapse schemas. To migrate data from one instance to another in a stack, the current procedure is to take a backup of the old stack, shut the stack down, and then copy the data to the new stack. Note there is a small risk that data changed in the old stack is lost if somebody adds something to the repository after the backup process has completed; this will be addressed by PLFM-404. (Even if you shut down the Synapse web portal before you take the backup, changes can still come in via the repo API, which must be up to take the backup.) In the meantime, work around this by communicating with team members and our small user base.

Update CNAMES (Skip this section if using existing Environments.)

Sign in to GoDaddy, select sagebase.org, and launch the Domain Manager. We have defined public URLs for the various stacks and components, e.g. synapse-staging (.sagebase.org) for the web app, auth-staging for auth, etc. Point these at the Elastic Beanstalk URL, which should be of the form stackName-componentName.elasticbeanstalk.com.

Once you have CNAMEs pointed to the new stack, update the stackInstance-stack.properties file, upload it to S3, and restart the app servers to apply the change. Having our components talk to each other via the public aliases avoids security exceptions. See PLFM-506.
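You can check an alias from any machine with dig; it should answer with the Beanstalk URL you configured:

dig +short CNAME synapse-staging.sagebase.org
# expected: something of the form stackName-componentName.elasticbeanstalk.com.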

Deploy From Artifactory

Create an IAM credentials file, using the platform credentials, per http://stackoverflow.com/questions/5396932/why-are-no-amazon-s3-authentication-handlers-ready

The IAM key should be AWSAccessKeyId=AKIAINNFCDBA3NBOQO2Q

Point to this file from the environment variable AWS_CREDENTIAL_FILE
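The credentials file itself is two key=value lines; the secret below is a placeholder (pull the real one from the StackCredentials .csv files on sodo):

AWSAccessKeyId=AKIAINNFCDBA3NBOQO2Q
AWSSecretKey=<<platform secret key>>

Then export AWS_CREDENTIAL_FILE=<path to that file> in the shell you will run the deployer from.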

In trunk\tools\SynapseDeployer\main.py, set the following:

- version = '0.8' # set to the actual version to be deployed
- isSnapshot = True
- beanstalk_application_name = 'Synapse-Staging' for staging, 'Synapse' for production
- componentsToUpgrade: set to the target stack, e.g. 'prod-b-auth' for stack 'b' of alpha/prod
- make sure deploymentBucket = 'elasticbeanstalk-us-east-1-325565585839'
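For example, the edited settings for a staging deployment of stack 'b' might look like this (values are illustrative, and the exact shape of componentsToUpgrade should be checked against the script itself):

version = '0.8'
isSnapshot = True
beanstalk_application_name = 'Synapse-Staging'
componentsToUpgrade = ['prod-b-auth']  # illustrative; confirm the expected format in main.py
deploymentBucket = 'elasticbeanstalk-us-east-1-325565585839'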

In the directory trunk\tools\SynapseDeployer, start the python interpreter, then type:

import sys
sys.path.append("boto-2.1.1")  # use the boto copy bundled with the deployer
import main                    # importing main runs the deployment configured above

Verify Deployment

To verify deployment, run top-level queries against the repository instances from an authenticated account.

Make sure you can download the MSKCC clinical data layer from S3.

TODO: Add queries and expected counts returned.
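Until canonical queries are documented here, a smoke test along the following lines can be used. This is a sketch: the /session login payload, the sessionToken header, and the query syntax follow the Synapse REST conventions of this era, but verify them against the current API before relying on the output.

# Authenticate, then run a top-level query against the repo service.
import json
import urllib
import urllib2

auth = 'https://auth-staging.sagebase.org/auth/v1'
repo = 'https://repo-staging.sagebase.org/repo/v1'

# Log in to get a session token (same admin credentials the loaders below use).
body = json.dumps({'email': '<platform_admin_email>', 'password': '<platform_admin_pw>'})
req = urllib2.Request(auth + '/session', body, {'Content-Type': 'application/json'})
token = json.loads(urllib2.urlopen(req).read())['sessionToken']

# Run a simple top-level query; a healthy stack returns a non-empty result set.
query = urllib.quote('select * from dataset limit 10')
req = urllib2.Request(repo + '/query?query=' + query, None, {'sessionToken': token})
print urllib2.urlopen(req).read()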

Build and deploy R packages

See R Package Builds for details of how to do this. You might ask Nicole to do this with you if you are new to it.

See Managing R packages for how to install the latest version on Belltown.

Once the latest version is deployed to the CRAN server, you should upgrade the R Synapse client version on Belltown. An upgrade script is available in /work/platform/configuration/deployment/synapseRClient.

cd /work/platform/configuration/deployment/synapseRClient
sudo ./upgradeSynapseClient.sh

Make sure to check the output for error messages.

How to run the Phenotype Descriptions Loader

Run this on the shared servers where the datasets live.

For just one dataset:

cd /work/platform/DatasetMetadataLoader
./clinicalVariableDescriptionsLoader.py -e https://repo-staging.sagebase.org/repo/v1 -a https://auth-staging.sagebase.org/auth/v1 \
--user <platform_admin_email> --password <platform_admin_pw> \
--layerId 3965 --descriptionFile /work/platform/source/sanger_cell_lines/phenotype/description.txt

For all datasets (the script reads the list from AllDatasetLayers.csv):

cd /work/platform/DatasetMetadataLoader
./clinicalVariableDescriptionsLoader.py -e https://repo-staging.sagebase.org/repo/v1 -a https://auth-staging.sagebase.org/auth/v1 \
--user <platform_admin_email> --password <platform_admin_pw>

You can find the code for this script here: clinicalVariableDescriptionsLoader.py