The following provides instructions on how to log on to the Sage Scientific Compute workspace using your Synapse credentials, and how to use the products provided in the AWS Service Catalog to set up or modify EC2 instances and S3 buckets.
...
Department: your department within Sage Bionetworks.
Project: the project that this instance will be used for. Please make a request to #sageit if your project is not in the list.
CostCenter: bill the product to this cost center code. If the appropriate cost center code is not in the list, select “Other / 000001” and create a custom “CostCenterOther” tag. Set the tag to a value from our official list of cost center codes. Example:
...
Note: You can add additional custom tags when provisioning resources; however, there are four reserved tags that you should avoid adding: Department, Project, CostCenter, and OwnerEmail. The OwnerEmail tag is automatically set to <Synapse Username>@synapse.org.
Notifications
Please skip the Notifications pane. SNS notifications are not operational at this time.
...
The AWS SSM allows direct access to private instances from your own computer terminal. To set up access with the AWS SSM, we need to create a special Synapse personal access token (PAT) that will work with the Sage Service Catalog. This special PAT can only be created using this workflow; creating a PAT from the Synapse personal token manager web page will NOT work.
Request a Synapse PAT by visiting https://sc.sageit.org/personalaccesstoken for Sage employees, or https://ad.strides.sc.sageit.org/personalaccesstoken for AMP-AD members. (You may need to log in to Synapse.) If you have already created a PAT through this mechanism and are repeating the process, you must first visit the token management page in Synapse and delete the existing token with the same name.
After logging into Synapse, a file containing the PAT, which is a long character string (e.g. eyJ0eXAiOiJ...Z8t9Eg), is returned to you. Save the file to your local machine, note where you saved it, then close the browser session.
Note: At this point you can verify that the PAT for the Service Catalog was successfully created by viewing the Synapse token management page. When the PAT expires, you will need to repeat these steps to create a new one. The PAT should look something like this:
...
To set up access to the AWS EC2 instances with the AWS SSM, we need to install the AWS CLI and make it source credentials with an external process.
Install the AWS CLI version 2 (SSM access will not work with version 1.x)
Install SSM session manager plugin
Create a synapse credentials script.
Linux/Mac: synapse_creds.sh with the content below. Add the execute permission to the synapse_creds.sh file (i.e. chmod +x synapse_creds.sh)
Code Block
#!/usr/bin/env bash

# Inputs
SC_ENDPOINT=$1    # i.e. https://sc.sageit.org
SYNAPSE_PAT=$2    # The Synapse Personal Access Token

# Endpoints
STS_TOKEN_ENDPOINT="${SC_ENDPOINT}/ststoken"

# Get Credentials
AWS_STS_CREDS=$(curl --location-trusted --silent -H "Authorization:Bearer ${SYNAPSE_PAT}" ${STS_TOKEN_ENDPOINT})

echo ${AWS_STS_CREDS}
Windows: synapse_creds.bat with the content below.
Code Block
@ECHO OFF

REM Inputs
REM %~1 The SC endpoint i.e. https://sc.sageit.org
REM %~2 The Synapse Personal Access Token

REM Use inputs to get credentials
for /f %%i in ('curl --location-trusted --silent -H "Authorization:Bearer %~2" "%~1/ststoken"') do set AWS_STS_CREDS=%%i

ECHO %AWS_STS_CREDS%
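For reference, the AWS CLI's credential_process setting (used in the next step) requires whatever the script prints to be a single JSON object of the following shape, so the /ststoken endpoint should return something like this (all values below are made-up placeholders):

```json
{
  "Version": 1,
  "AccessKeyId": "ASIAEXAMPLEKEYID",
  "SecretAccessKey": "wJalrXUtnFEMIEXAMPLESECRET",
  "SessionToken": "FwoGZXIvYXdzEXAMPLETOKEN...",
  "Expiration": "2021-07-01T00:00:00Z"
}
```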
Open the file containing the Synapse PAT and copy the long character string (e.g. eyJ0eXAiOiJ...Z8t9Eg).
Create a ~/.aws/config file if you don’t already have one. Add the following to your ~/.aws/config file, replacing <PERSONAL_ACCESS_TOKEN> with the PAT you downloaded, then set /absolute/path/to/synapse_creds.sh to the location of the synapse_creds.sh or synapse_creds.bat file.
Code Block
[profile service-catalog]
region=us-east-1
credential_process = "/absolute/path/to/synapse_creds.sh" "https://sc.sageit.org" "<PERSONAL_ACCESS_TOKEN>"
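If you prefer to script this step, the profile can be appended from a terminal; a minimal sketch, where the script path and token are placeholders you must replace with your own values:

```shell
# Create the AWS config directory if needed and append the
# service-catalog profile. The credential_process path and
# <PERSONAL_ACCESS_TOKEN> below are placeholders.
mkdir -p ~/.aws
cat >> ~/.aws/config <<'EOF'
[profile service-catalog]
region=us-east-1
credential_process = "/absolute/path/to/synapse_creds.sh" "https://sc.sageit.org" "<PERSONAL_ACCESS_TOKEN>"
EOF
```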
Go to the service catalog provisioned product page. Scroll down to get your EC2InstanceId under the PROVISION_PRODUCT event. The EC2InstanceId is your --target.
Run the SSM start-session command to access the instance. Note: Windows users should do this in command prompt. In the following example, the EC2InstanceId from the previous step is i-0fd5c9ff0ef675ceb.
Code Block
➜ aws ssm start-session --profile service-catalog \
    --target i-0fd5c9ff0ef675ceb

Starting session with SessionId: 3377358-0cab70190f97fcf78
sh-4.2$
Note: By default you are logged in as the ssm-user. If you prefer to start your session as a different user, you can try running SSM access with custom commands.
...
Setting up an SSH connection with ec2-user:
Use the ssm start-session command to connect to the instance
Code Block
aws ssm start-session --profile service-catalog \
    --target i-0fd5c9ff0ef675ceb \
    --document-name AWS-StartInteractiveCommand \
    --parameters command="sudo su - ec2-user"
Copy the public portion of your ssh key (on your local computer) to the instance’s ~/.ssh/authorized_keys file.
Set the permission of the authorized_keys file to 600 (i.e. chmod 600 ~/.ssh/authorized_keys)
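The two steps above can be done on the instance in one short sequence; a minimal sketch, where the key shown is a made-up placeholder for your actual public key:

```shell
# Run on the EC2 instance (e.g. inside the ssm session as ec2-user).
# The key below is a placeholder; paste your own public key instead.
mkdir -p ~/.ssh
echo "ssh-rsa AAAAB3NzaEXAMPLE your_user@your_laptop" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```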
Add the following to your local machine’s ~/.ssh/config file
Code Block
# SSH over Session Manager
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
From your local machine, execute the ssh command to access the instance
Code Block
➜ AWS_PROFILE=service-catalog ssh -i ~/.ssh/id_rsa ec2-user@i-0fd5c9ff0ef675ceb
Last login: Thu Jun 17 21:25:56 2021

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-10-41-23-76 ~]$
From your local machine, execute the scp command to copy files directly to the instance
Code Block
➜ AWS_PROFILE=service-catalog scp -i ~/.ssh/id_rsa README.md ec2-user@i-07eeb59282fafe244:~/.
README.md                                    100%  814     9.2KB/s   00:00
SSM access to applications
When running apps in the instance you may want to run the apps on specific ports. The AWS SSM allows you to expose those ports to your local computer using a technique called port forwarding. Here’s an example of how to enable port forwarding to an application:
Run an application on the EC2 instance (e.g. docker run -p 80:80 httpd)
Code Block
[ec2-user@ip-10-49-26-50 ~]$ docker run -p 80:80 httpd
Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
33847f680f63: Pull complete
d74938eee980: Pull complete
963cfdce5a0c: Pull complete
8d5a3cca778c: Pull complete
e06a573b193b: Pull complete
Digest: sha256:71a3a8e0572f18a6ce71b9bac7298d07e151e4a1b562d399779b86fef7cf580c
Status: Downloaded newer image for httpd:latest
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
[Thu Jul 22 23:54:12.106344 2021] [mpm_event:notice] [pid 1:tid 140706544895104] AH00489: Apache/2.4.48 (Unix) configured -- resuming normal operations
[Thu Jul 22 23:54:12.107307 2021] [core:notice] [pid 1:tid 140706544895104] AH00094: Command line: 'httpd -D FOREGROUND'
To access that app from your local computer, use the port forwarding feature by running the AWS SSM CLI command:
Code Block
aws ssm start-session --profile service-catalog \
    --target i-0fd5c9ff0ef675ceb \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"],"localPortNumber":["9090"]}'
To set up the same port forwarding in the Windows Command Prompt, use this syntax:
Code Block
aws ssm start-session --profile service-catalog \
    --target i-0fd5c9ff0ef675ceb \
    --document-name AWS-StartPortForwardingSession \
    --parameters "{\"portNumber\":[\"80\"],\"localPortNumber\":[\"9090\"]}"
Now you should be able to access that app on your local machine at http://localhost:9090.
Connecting to Windows Instances
...
Connecting to the Windows instance’s shell is similar to accessing a Linux instance’s shell. Just follow the instructions in SSM access to an Instance.
Connect to Windows desktop using SSM session manager
Connecting to the Windows desktop requires a few more steps.
Connect to the Windows shell.
Create a new user and add it to the “Administrators” group
Code Block
$Password = ConvertTo-SecureString "P@ssW0rD!" -AsPlainText -Force
New-LocalUser "admin" -Password $Password
Add-LocalGroupMember -Group "Administrators" -Member "admin"
Follow the SSM access to applications instructions to set up port forwarding to Windows RDP (port 3389)
Code Block
aws ssm start-session --profile service-catalog \
    --target i-0fd5c9ff0ef675ceb \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["3389"],"localPortNumber":["3389"]}'
Install the Microsoft Remote Desktop client on your computer.
Click “+” to add a new PC. In the “PC Name” field, enter “localhost”.
Log in with username “admin” and password "P@ssW0rD!"
Connect to Windows desktop using VPN and Jumpcloud (Sage Staff Only)
Sage staff have the option to access the Windows desktop using their Jumpcloud credentials. Here are the steps:
Once an instance is provisioned, locate its instance id (e.g. i-06531e8f977ca20ea)
Create a Jira IT issue requesting to associate your Jumpcloud user with that instance id
Once Sage IT makes the association, you can log in to the VPN and use remote desktop to log in to the instance with your Jumpcloud credentials.
Provisioning and Using a Notebook
...
Using the update action allows you to change parameters or update to a new version of the product. WARNING: changes to configuration parameters usually result in a re-creation (“replacement”) of the instance; any data saved on the instance will be lost, and the nature of the update performed by Amazon is difficult to predict. We recommend that you save any important data to S3, provision a new instance, and terminate the original.
Terminate
The terminate action deletes the instance permanently.
...
The “Environment” parameters are required fields. You can replace the default values; however, please do not leave these fields empty. Also pay special attention to the formatting that’s required for the values. The deployment will fail if the formatting isn’t correct.
There is an AWS bug that prevents disabling the scheduled job after it has been enabled. The workaround is to either (1) terminate the job and create a new one, or (2) set the rate to some distant time in the future (e.g. 3650 days).
...
Secrets are stored in AWS Secrets Manager and exposed to the job as environment variables. The logs above print out the environment variables from the job; take note of the “SCHEDULED_JOB_SECRETS” parameter, whose secrets were exposed in the logs by the “printenv” command. Please make sure to never expose secrets this way. DO NOT PRINT ENVIRONMENT VARIABLES.
Accessing Scheduled Job Secrets
Job secrets can be accessed in a number of different ways. The first way is simply to get them from the Docker container environment variable SCHEDULED_JOB_SECRETS.
Environment variable example:
...
You can use a scheduled job to access data in Synapse. To do so:
Create a personal access token (PAT) as explained here.
In the Secrets parameter of your scheduled job, include the PAT, e.g.
Code Block "PAT":"eyJ0eXAiOiJKV1QiLCJraWQiOiJXN05OOldMSlQ..."
You may include other secrets if needed, each name/value pair separated by a comma.
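For example, a Secrets parameter containing the PAT plus one additional, hypothetical secret could look like:

```
"PAT":"eyJ0eXAiOiJKV1QiLCJraWQiOiJXN05OOldMSlQ...","API_KEY":"a-made-up-value"
```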
Add code in your containerized script to parse the SCHEDULED_JOB_SECRETS JSON object, extract the PAT, and use it to authenticate. Here is a Python example, synapse_login.py:
Code Block
import json, os, synapseclient

secrets = json.loads(os.getenv("SCHEDULED_JOB_SECRETS"))
auth_token = secrets["PAT"]
syn = synapseclient.Synapse()
syn.login(authToken=auth_token)
Create a container image which includes the Synapse client, this script, and the command to invoke the script. Here is a Dockerfile to create such a container image:
Code Block
FROM sagebionetworks/synapsepythonclient
COPY synapse_login.py synapse_login.py
CMD python3 synapse_login.py
Now you can build the container image, push it to a public Docker registry, and use it in your Scheduled Job.
...