Phase Algorithm

In this example, we are not using MapReduce to its full potential; we use it only to run jobs in parallel, one job per chromosome. The phase algorithm from UW writes its output to local files instead of stdout. Many existing executables and applications can be run this way, either standalone or as part of a pipeline.

Mapper

  1. Write the mapper script
    ~>cat phaseMapper.sh
    #!/bin/sh
    
    RESULT_BUCKET=s3://sagetest-YourUsername/results
    
    # Send periodic keepalive output so that MapReduce does not time out
    # during phase processing, since the phase algorithm does not send output
    # to stdout on a regular basis. (Hadoop streaming treats "reporter:status:"
    # lines on stderr as task status updates.)
    perl -e 'while(! -e "./timetostop") { print "keepalive\n"; print STDERR "reporter:status:keepalive\n"; sleep 300; }' &
    
    while read S3_INPUT_FILE; do
        echo input to process ${S3_INPUT_FILE} 1>&2
    
        # For debugging purposes, print out the files cached for us
        ls -la 1>&2
    
        # Parse the s3 file path to get the file name
        LOCAL_INPUT_FILE=$(echo ${S3_INPUT_FILE} | perl -pe 'if (/^((s3[n]?):\/)?\/?([^:\/\s]+)((\/\w+)*\/)([\w\-\.]+[^#?\s]+)(.*)?(#[\w\-]+)?$/) {print "$6\n"};' | head -1)
    
        # Download the file from S3
        echo hadoop fs -get ${S3_INPUT_FILE} ${LOCAL_INPUT_FILE} 1>&2
        hadoop fs -get ${S3_INPUT_FILE} ${LOCAL_INPUT_FILE} 1>&2
    
        # Run phase processing
        ./phase ${LOCAL_INPUT_FILE} ${LOCAL_INPUT_FILE}_out 100 1 100
    
        # Upload the output files
        ls -la ${LOCAL_INPUT_FILE}*_out* 1>&2
        for f in ${LOCAL_INPUT_FILE}*_out*
        do
            echo hadoop fs -put $f ${RESULT_BUCKET}/$LOCAL_INPUT_FILE/$f 1>&2
            hadoop fs -put $f ${RESULT_BUCKET}/$LOCAL_INPUT_FILE/$f 1>&2
        done
        echo processed ${S3_INPUT_FILE} 1>&2
        echo 1>&2
        echo 1>&2
    done
    
    # Tell our background keepalive task to exit
    touch ./timetostop
    
    exit 0
    
  2. Upload the mapper script to S3 via the AWS console or s3curl
    /work/platform/bin/s3curl.pl --id $USER --put phaseMapper.sh https://s3.amazonaws.com/sagetest-$USER/scripts/phaseMapper.sh
    
  3. Upload the phase binary to S3 too
    /work/platform/bin/s3curl.pl --id $USER --put PHASE https://s3.amazonaws.com/sagetest-$USER/scripts/phase
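
It can also help to sanity-check the S3-path-to-filename parsing used in phaseMapper.sh before running the job. A quick local check, using the sample input path from the Input section below (the expected output is the bare file name):

~>echo "s3://sagetest-YourUsername/input/ProSM_chrom_MT.phase.inp" | \
  perl -pe 'if (/^((s3[n]?):\/)?\/?([^:\/\s]+)((\/\w+)*\/)([\w\-\.]+[^#?\s]+)(.*)?(#[\w\-]+)?$/) {print "$6\n"};' | head -1
ProSM_chrom_MT.phase.inp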
    

Reducer

We do not need a reducer for this task; all we want is the output of the phase algorithm. Therefore, in the job configuration be sure to set "-jobconf", "mapred.reduce.tasks=0"

Input

  1. Write your input file
    ~>cat phaseInput.txt
    s3://sagetest-YourUsername/input/ProSM_chrom_MT.phase.inp
    ... many more files, one per chromosome
    
  2. Upload your input file to S3 via the AWS console or s3curl
    /work/platform/bin/s3curl.pl --id $USER --put phaseInput.txt https://s3.amazonaws.com/sagetest-$USER/input/phaseInput.txt
    
  3. Also upload all the data files referenced in phaseInput.txt to the locations specified in that file (one possible bulk-upload loop is sketched below).
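
A minimal sketch of such a bulk upload, assuming every data file listed in phaseInput.txt sits in the current directory and belongs under the input/ prefix of your sagetest-$USER bucket (adjust the paths to match your own layout):

# Read each s3:// path from phaseInput.txt and upload the matching local
# file to the corresponding https://s3.amazonaws.com/ location via s3curl
while read S3_PATH; do
    FILE=$(basename ${S3_PATH})
    /work/platform/bin/s3curl.pl --id $USER --put ${FILE} \
        https://s3.amazonaws.com/sagetest-$USER/input/${FILE}
done < phaseInput.txt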

Run the MapReduce Job

Job Configuration

  1. Write your job configuration. Note that you need to change the output location each time you run this!
    ~>cat phase.json
    [
        {
            "Name": "MapReduce Step 1: Run Phase",
            "ActionOnFailure": "CANCEL_AND_WAIT",
            "HadoopJarStep": {
                "Jar": "/home/hadoop/contrib/streaming/hadoop-streaming.jar",
                "Args": [
                    "-input",     "s3n://sagetest-YourUsername/input/phaseInput.txt",
                    "-output",    "s3n://sagetest-YourUsername/output/phaseTry1",
                    "-mapper",    "s3n://sagetest-YourUsername/scripts/phaseMapper.sh",
                    "-cacheFile", "s3n://sagetest-YourUsername/scripts/phase#phase",
                    "-jobconf",   "mapred.reduce.tasks=0",
                    "-jobconf",   "mapred.task.timeout=604800000"
                ]
            }
        }
    ]
    
  2. Put it on one of the shared servers sodo/ballard/belltown.
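
Because the job definition must be valid JSON (note there is no comma after the last "-jobconf" entry in "Args"), it can save a failed submission to validate the file before copying it over. A small sketch, assuming Python is available and sodo is the shared server you will submit from:

# Check the JSON syntax, then copy the config to the shared server
python -m json.tool phase.json > /dev/null && scp phase.json sodo: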

If you find that your mapper tasks are not being balanced evenly across your fleet, you can add lines like the following to your job config (here 26 map tasks are spread across the 13 slave nodes started below, with at most 2 concurrent map tasks per node):

"-jobconf", "mapred.map.tasks=26", 
"-jobconf", "mapred.tasktracker.map.tasks.maximum=2",

Start the MapReduce cluster

  1. ssh to one of the shared servers sodo/ballard/belltown
  2. Kick off the Elastic MapReduce job. This will start 14 hosts: one for the master and 13 slaves running the map tasks.
    ~>/work/platform/bin/elastic-mapreduce-cli/elastic-mapreduce --credentials ~/$USER-credentials.json --create \
    --enable-debugging --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configurations/latest/memory-intensive \
    --master-instance-type=m1.small --slave-instance-type=c1.medium --num-instances=14 --json phase.json --name phaseTry1
    
    Created job flow j-GA47B7VD991Q
    

Check on the job status

If something is misconfigured, it will fail in a minute or two. Check on the job status and make sure it is running.

~>/work/platform/bin/elastic-mapreduce-cli/elastic-mapreduce --credentials ~/$USER-credentials.json --list --jobflow j-GA47B7VD991Q

j-GA47B7VD991Q     RUNNING        ec2-174-129-134-200.compute-1.amazonaws.com       phaseTry1
   RUNNING        MapReduce Step 1: Run Phase

If there were any errors, make corrections and resubmit the job step:

~>/work/platform/bin/elastic-mapreduce-cli/elastic-mapreduce --credentials ~/$USER-credentials.json --json phase.json --jobflow j-GA47B7VD991Q
Added jobflow steps

Get your results

Look in your S3 bucket for the results.
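
If you prefer the command line to the AWS console, one way to see which result files were produced is to ask S3 for a key listing under the results prefix. A sketch using the same s3curl tool (the prefix query parameter is standard S3 bucket-listing syntax):

# Returns an XML listing of every key under results/ in your bucket
/work/platform/bin/s3curl.pl --id $USER -- "https://s3.amazonaws.com/sagetest-$USER/?prefix=results/"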
