...
Note that all Synapse resources should be created in US East, as Beanstalk is not yet available everywhere.
Table of Contents
...
Get the build artifacts from Artifactory
You should not deploy anything you built yourself on your local machine. Only deploy build artifacts generated by Bamboo and stored in Artifactory as the result of a clean build. See Branching and Tagging for information about managing the build, branch, and tag process. You will need three .war files from Artifactory for a deployment: services-repository-<version>.war, services-authentication-<version>.war, and portal-<version>.war. Each must go into its own Beanstalk environment.
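As a rough illustration, fetching the three artifacts could look like the sketch below. The Artifactory base URL and repository path are placeholders (assumptions), not the real coordinates; copy the actual download URLs from the Artifactory entry for the Bamboo build you are deploying.
# Hypothetical sketch -- ARTIFACTORY_BASE and the path layout are assumptions;
# use the real download URLs shown in Artifactory for the chosen Bamboo build.
VERSION="<version>"
ARTIFACTORY_BASE="https://artifactory.example.org/libs-releases-local/org/sagebionetworks"
for artifact in services-repository services-authentication portal; do
    wget "${ARTIFACTORY_BASE}/${artifact}/${VERSION}/${artifact}-${VERSION}.war"
done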
...
Create or Configure MySQL RDS Service
See Synapse Database Setup and Configuration (wiki/display/PLFM/Synapse+Database+Setup+and+Configuration) for details on how to create a new schema for a new stack or instance. The staging and production stacks use Amazon's RDS service. Currently, both stacks use different databases in the same RDS instance. The same RDS instance also holds the ID Generator database, as well as data for Crowd.
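For orientation, creating a new schema and user on the RDS instance generally amounts to something like the sketch below; the endpoint, schema, user, and password are placeholders (assumptions), and the Synapse Database Setup and Configuration page remains the authoritative procedure.
# Hypothetical sketch -- endpoint, schema, user, and password are placeholders;
# follow the Synapse Database Setup and Configuration page for the real steps.
mysql -h <rds_endpoint>.us-east-1.rds.amazonaws.com -u <admin_user> -p <<'SQL'
CREATE DATABASE IF NOT EXISTS `<stack_schema>`;
CREATE USER '<stack_user>'@'%' IDENTIFIED BY '<stack_password>';
GRANT ALL PRIVILEGES ON `<stack_schema>`.* TO '<stack_user>'@'%';
FLUSH PRIVILEGES;
SQL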
...
We have defined public URLs for the various stacks and components, e.g. synapse-staging (.sagebase.org) for the web app, auth-staging for auth, etc. Point these to the Elastic Beanstalk URL, which should be something of the form stackName-componentName.elasticbeanstalk.com. Sign in to GoDaddy, select sagebase.org, and launch Domain Manager. Create synapse-prod and point it to prod-synapseweb.elasticbeanstalk.com. Ditto for auth-prod and reposvc-prod.
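Once the records have propagated, a quick way to confirm the mappings is to resolve each friendly name and check that it points at the expected Beanstalk host:
# Confirm each friendly name resolves to its Beanstalk CNAME target.
# The prod names below follow the examples above; adjust for staging.
for name in synapse-prod auth-prod reposvc-prod; do
    echo "== ${name}.sagebase.org =="
    dig +short "${name}.sagebase.org" CNAME
done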
How to run the Data Loader (Deprecated)
We should be migrating data and maintaining it between version upgrades now.
Once environments are running, you can populate the system with a set of starting data. On one of the local servers, go to /work/platform/DatasetMetadataLoader and execute the following:
# Make sure you have the latest version of the loader
svn up
# Execute the loader.
# Replace <repo_instance> and <auth_instance> with the repository and authentication instances.
# Either make sure that <platform_admin_email> is a Synapse administrator in Crowd, or replace it with a Synapse administrator account.
python datasetCsvLoader.py -e http://<repo_instance>/repo/v1 \
    -a http://<auth_instance>/auth/v1 -u <platform_admin_email> -p <platform_admin_pw>
This will create a publicly-accessible project called Sage BioCuration, and populate it with curated data from Sage's repository data team.
If you need to repopulate the data in S3, pass the -3 argument to the data loader. It uploads the data serially right now, so it takes an hour or two. We should only need to do this if we've messed up our S3 bucket.
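For example, a repopulation run is the same invocation with -3 added (assuming the flag combines with the arguments shown above):
# Re-upload the data files to S3 in addition to loading metadata.
# Expect this to take an hour or two since the upload is serial.
python datasetCsvLoader.py -3 -e http://<repo_instance>/repo/v1 \
    -a http://<auth_instance>/auth/v1 -u <platform_admin_email> -p <platform_admin_pw>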
Verify Deployment
To verify deployment, run top-level queries against the repository instances from an authenticated account.
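A command-line sketch of such a check is below; the query endpoint, query syntax, and sessionToken header are assumptions about the repository API, so adjust them to whatever the deployed services actually expose.
# Hypothetical verification sketch -- the /query endpoint, query syntax, and
# sessionToken header are assumptions; substitute the real API calls.
REPO="http://<repo_instance>/repo/v1"
TOKEN="<session_token_for_an_authenticated_account>"
curl -s -H "sessionToken: ${TOKEN}" \
    "${REPO}/query?query=select+*+from+dataset+limit+10"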
...
from /usr/local/atlassian-crowd-2.2.7/. Use the aforementioned "ps -ef..." command to make sure no Crowd java process is running. If necessary, 'kill' lingering instances before running "start_crowd.sh". It's important not to have multiple instances of the java process running.
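A check-and-restart sequence consistent with the steps above might look like this; the grep pattern is an assumption, so match it to however the Crowd java process shows up in your ps output.
# Look for a lingering Crowd java process (grep pattern is an assumption).
ps -ef | grep java | grep -i crowd | grep -v grep
# If anything is listed, kill it by PID before restarting, e.g.:
# kill <pid>
# Then start Crowd from its install directory.
cd /usr/local/atlassian-crowd-2.2.7
./start_crowd.sh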