The following provides instructions on how to log in to the Sage Scientific Compute workspace using your Synapse credentials, and how to use the products provided in the AWS Service Catalog to set up or modify EC2 instances and S3 buckets.
...
Note: You can add additional custom tags when provisioning resources; however, there are three reserved tags that you should avoid adding: Department, Project, and OwnerEmail. The OwnerEmail tag is automatically set to <Synapse Username>@synapse.org.
Notifications
Please skip the Notifications pane. SNS notifications are not operational at this time.
...
AWS SSM allows direct access to private instances from your own computer terminal. To set up access with AWS SSM, we need to create a special Synapse personal access token (PAT) that will work with the Sage Service Catalog. This special PAT can only be created using the workflow below; creating a PAT from the Synapse personal token manager web page will NOT work.
Request a Synapse PAT by visiting https://sc.sageit.org/personalaccesstoken (for Sage employees) or https://ad.strides.sc.sageit.org/personalaccesstoken (for AMP-AD members). You may need to log in to Synapse. If you have already created a PAT through this mechanism and are repeating the process, you must first visit the token management page in Synapse and delete the existing token with the same name.
After logging into Synapse, a file containing the PAT, which is a long character string (e.g. eyJ0eXAiOiJ...Z8t9Eg), is returned to you. Save the file to your local machine, note the location where you saved it, and then close the browser session.
Note: At this point you can verify that the PAT for the Service Catalog was successfully created by viewing the Synapse token management page. When the PAT expires you will need to repeat these steps to create a new one. The PAT should look something like this:
...
Run an application on the EC2 instance (e.g. docker run -p 80:80 httpd):
Code Block
[ec2-user@ip-10-49-26-50 ~]$ docker run -p 80:80 httpd
Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
33847f680f63: Pull complete
d74938eee980: Pull complete
963cfdce5a0c: Pull complete
8d5a3cca778c: Pull complete
e06a573b193b: Pull complete
Digest: sha256:71a3a8e0572f18a6ce71b9bac7298d07e151e4a1b562d399779b86fef7cf580c
Status: Downloaded newer image for httpd:latest
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
[Thu Jul 22 23:54:12.106344 2021] [mpm_event:notice] [pid 1:tid 140706544895104] AH00489: Apache/2.4.48 (Unix) configured -- resuming normal operations
[Thu Jul 22 23:54:12.107307 2021] [core:notice] [pid 1:tid 140706544895104] AH00094: Command line: 'httpd -D FOREGROUND'
To access that app from your own machine, an SC user can use the port forwarding feature by running the AWS SSM CLI command:
Code Block
aws ssm start-session --profile service-catalog \
    --target i-0fd5c9ff0ef675ceb \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"],"localPortNumber":["9090"]}'
To do the same from the Windows Command Prompt, use this syntax:
Code Block
aws ssm start-session --profile service-catalog ^
    --target i-0fd5c9ff0ef675ceb ^
    --document-name AWS-StartPortForwardingSession ^
    --parameters "{\"portNumber\":[\"80\"],\"localPortNumber\":[\"9090\"]}"
Now you should be able to access that app on your local machine at http://localhost:9090.
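As a quick check, while the port-forwarding session is still running, you can request the forwarded port from a second terminal on your local machine; the ports below match the example above:

Code Block
# Run this in a second local terminal while the SSM port-forwarding session is active.
curl http://localhost:9090
# The httpd container from the example above should answer with its default page,
# e.g. "<html><body><h1>It works!</h1></body></html>"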
...
The update action allows you to change parameters or update to a new version of the product. WARNING: changes to configuration parameters usually result in a recreation (“replacement”) of the instance, any data saved on the instance will be lost, and the exact nature of the update performed by Amazon is difficult to predict. We recommend that you save any important data to S3, provision a new instance, and terminate the original.
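For example, before terminating an instance you might copy your working data to an S3 bucket you have provisioned. This is a minimal sketch; the bucket name and paths below are placeholders to replace with your own:

Code Block
# Run on the EC2 instance (or through an SSM session); copies a local data
# directory to a bucket you own before the instance is terminated.
aws s3 sync /home/ec2-user/data s3://my-project-bucket/ec2-backup/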
Terminate
The terminate action deletes the instance permanently.
...
Note: Scheduled Jobs products are currently available only to Sage employees.
Scheduled Job Products
Scheduled jobs are essentially cron jobs that we’ve set up to run in AWS Batch. This product allows you to run an arbitrary task in the cloud to process your workload. The task must be run using a Docker image, and it can be triggered manually or set up to run on a cron schedule.
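Before scheduling a job, it can help to confirm locally that your image runs the intended command. This is a sketch; the image name and command below are placeholders:

Code Block
# Placeholder image name and command; confirms the container runs the task you plan to schedule.
docker run --rm my-registry/my-task:latest printenv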
Creating Scheduled Job Products
To create a scheduled job, select “Products List” from the navigation panel on the left. Next, select “Scheduled Jobs” from the list. On the product page, click the orange “LAUNCH PRODUCT” button under the product description, then fill out the wizard. Most of the parameters contain helpful information describing the valid input values.
Note: The “Environment” parameters are required fields. You can replace the default values, but please do not leave these fields empty. Also pay special attention to the formatting required for those fields; the deployment will fail if the formatting isn’t correct.
...
Manually Run Scheduled Jobs
Once provisioning is complete, your Scheduled Jobs product will appear in the “Provisioned Products” list with the status Available. Select “Provisioned Product Details” from the navigation panel on the left, and click on your product. A product that has a “Succeeded” event will have outputs that include a “SubmitJobApi” link. That link is a REST request to trigger the job and should return something like this:
...
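As a rough sketch, triggering the job from a terminal might look like the following. The URL is a placeholder for the “SubmitJobApi” value in your own product outputs, and the HTTP method and any authentication requirements are assumptions to confirm against those outputs:

Code Block
# Placeholder URL: copy the real value from the "SubmitJobApi" output of your provisioned product.
# Adjust the HTTP method (GET/POST) if your endpoint expects something different.
curl -X POST "https://<submit-job-api-url-from-outputs>"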
View Scheduled Job Status
Click on the “Jobs” link to view the Batch job queue status. Once triggered, the job should transition from STARTING → RUNNING → SUCCEEDED.
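If you would rather check from a terminal than the console, the same status is visible through the AWS Batch CLI. This is a sketch; the job queue name and job ID below are placeholders:

Code Block
# List jobs currently running in the queue used by your Scheduled Jobs product (placeholder name).
aws batch list-jobs --job-queue <your-job-queue> --job-status RUNNING
# Or check a single job once you know its ID.
aws batch describe-jobs --jobs <job-id> --query 'jobs[0].status'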
...
The job will send logs to AWS CloudWatch. To access the logs, click on the “Logs” link in the PROVISION_PRODUCT outputs. Below is an example of a log for a job that ran the command “printenv”:
...
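If you prefer the command line, the logs can also be followed with the AWS CLI (version 2). This is a sketch; the log group name below is a placeholder for the one shown in the “Logs” link:

Code Block
# Stream the job's CloudWatch logs from a terminal (placeholder log group name).
aws logs tail <your-job-log-group> --follow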
Scheduled Job Secrets
Secrets are stored in AWS Secrets Manager and exposed to the job as environment variables. The log above prints out the environment variables from the job; take note of the “SCHEDULED_JOBS_SECRETS” parameter. Because the secrets passed into this product are exposed as environment variables, please make sure to never expose them: DO NOT PRINT ENVIRONMENT VARIABLES.
Accessing Scheduled Job Secrets
To access the job secrets, copy the “JobSecretArn” from the Service Catalog PROVISION_PRODUCT outputs. Then provide the JobSecretArn to either the AWS Secrets Manager CLI or one of the AWS SDKs to retrieve the secret.
AWS Secrets Manager CLI example:
Code Block
aws secretsmanager --output json get-secret-value --secret-id arn:aws:secretsmanager:us-east-1:465877038949:secret:JobSecrets-rEx1eKL9pokj-h7hCGO
{
    "ARN": "arn:aws:secretsmanager:us-east-1:465877038949:secret:JobSecrets-rEx1eKL9pokj-h7hCGO",
    "Name": "JobSecrets-rEx1eKL9pokj",
    "VersionId": "09904c83-f4ea-4664-a773-eded857ab5a0",
    "SecretString": "{ \"SECRET1\":\"Shh1\" }",
    "VersionStages": [
        "AWSCURRENT"
    ],
    "CreatedDate": "2021-12-18T09:01:23.690000-08:00"
}
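If you only need a single value out of the secret and have jq available, you can avoid printing the full response. The ARN below is the example ARN from above, and SECRET1 is the example key shown in the SecretString:

Code Block
# Fetch only the SecretString and extract one key, rather than echoing the whole secret.
aws secretsmanager get-secret-value \
    --secret-id arn:aws:secretsmanager:us-east-1:465877038949:secret:JobSecrets-rEx1eKL9pokj-h7hCGO \
    --query SecretString --output text | jq -r '.SECRET1'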
AWS SDK for Python (boto3) example:

Code Block
# Use this code snippet in your app.
# If you need more information about configurations or implementing the sample code, visit the AWS docs:
# https://aws.amazon.com/developers/getting-started/python/

import boto3
import base64
from botocore.exceptions import ClientError


def get_secret():
    secret_name = "arn:aws:secretsmanager:us-east-1:465877038949:secret:JobSecrets-rEx1eKL9pokj-h7hCGO"
    region_name = "us-east-1"

    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
    # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
    # We rethrow the exception by default.
    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'DecryptionFailureException':
            # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
        elif e.response['Error']['Code'] == 'InternalServiceErrorException':
            # An error occurred on the server side.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
        elif e.response['Error']['Code'] == 'InvalidParameterException':
            # You provided an invalid value for a parameter.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
        elif e.response['Error']['Code'] == 'InvalidRequestException':
            # You provided a parameter value that is not valid for the current state of the resource.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
        elif e.response['Error']['Code'] == 'ResourceNotFoundException':
            # We can't find the resource that you asked for.
            # Deal with the exception here, and/or rethrow at your discretion.
            raise e
    else:
        # Decrypts secret using the associated KMS key.
        # Depending on whether the secret is a string or binary, one of these fields will be populated.
        if 'SecretString' in get_secret_value_response:
            secret = get_secret_value_response['SecretString']
        else:
            decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])

    # Your code goes here.
...