
Introduction

Auditing data for the Synapse REST API is captured by a Spring interceptor, AccessInterceptor, which is similar to a web filter.  This interceptor is configured to listen to all web service calls made to the repository services.  For each call, the AccessInterceptor gathers data to fill out an AccessRecord model object.  The AccessRecord data is then written as zipped CSV files directly to the prod.access.record.sagebase.org S3 bucket.  These CSV files are initially too small to process efficiently, so a worker process merges the files by hour.

AccessRecord S3 Files

All AccessRecord CSV data for a single hour, from all EC2 instances of a stack, is merged into a single file.  The following is an example of the resulting path:

...

The columns, in order, are: returnObjectId, elapseMS, timestamp, via, host, threadId, userAgent, queryString, sessionId, xForwardedFor, requestURL, userId, origin, date, method, vmId, instance, stack, success.

A sample record, shown as column = value pairs (empty columns omitted):

  • elapseMS = 43
  • timestamp = 1379430033942
  • host = repo-prod-13.prod.sagebase.org
  • threadId = 659
  • userAgent = Synpase-Java-Client/2013-09-13-e70558e-662 Synapse-Web-Client/13.0
  • sessionId = b4331f55-6c65-4f6b-a2c5-ee6830cf7641
  • requestURL = /repo/v1/entity/header
  • userId = 273978
  • date = 2013-09-17
  • method = POST
  • vmId = eca4eb39c13ac98c:7461944:1412973c3ee:-7ffd
  • instance = 13
  • stack = prod
  • success = true

The remaining sample rows are similar calls against /repo/v1/query, /repo/v1/entity/{id}/bundle, /repo/v1/entity/{id}/benefactor, /repo/v1/entity/{id}/acl, and /repo/v1/entity/{id}/wiki.

Column Description

  • returnObjectId - For any method that returns an object with an ID, this column will contain the returned ID.  This is the only way to determine the ID of a newly created object from a POST.
  • elapseMS - The elapsed time of the call in milliseconds.
  • timestamp - The exact time the call was made in epoch time (milliseconds since 1/1/1970).
  • via - The value of the "via" header (see: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields).
  • host - The value of the "host" header (see: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields).
  • threadId - The ID of the thread used to process the request.
  • userAgent - The value of the "User-Agent" header (see: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields).
  • queryString - The value of the request queryString.
  • sessionId - For each call, a new UUID is generated for the sessionId.  The sessionId is also bound to the logging thread context and written in all log entries.  This ties access records to log entries.
  • xForwardedFor - The value of the "X-Forwarded-For" header (see: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields).
  • requestURL - The URL of the request.
  • userId - For calls where the user is authenticated via a sessionToken or an API key, this column will contain the numeric ID of the user.
  • origin - The value of the "Origin" header (see: http://en.wikipedia.org/wiki/List_of_HTTP_header_fields).
  • date - The year-month-day date string.
  • method - The HTTP method: GET, POST, PUT, DELETE
  • vmId - When each EC2 instance of a stack starts, a new unique identifier for the JVM is issued.  This is captured in the access log so calls from a single machine can be grouped together.
  • instance - The instance number of the stack.
  • stack - The stack identifier.  This will always be "prod" for production stacks.
  • success - Set to true when a call completes without an exception, otherwise set to false.  The stack trace of an exception can be found by searching the logs for the sessionId of any failed access record.
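As an illustration of how these columns combine in practice, here is a sketch of a HiveQL query that lists the sessionIds of failed calls so their stack traces can be looked up in the logs.  It assumes the access_record_local table set up later on this page, and that success is stored as a string; adjust the predicate if the column is a true boolean.

```sql
-- Sketch: find the sessionId of each failed call so the corresponding
-- stack trace can be located in the application logs.
select sessionId, requestURL, date
from access_record_local
where success = 'false'
limit 20;
```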

Audit Analysis

Analysis of the audit data can be done using EMR Hive.  The AWS extensions to Hive include support for reading zipped CSV data directly from S3.  This means we can launch an Elastic MapReduce (EMR) Hive cluster and copy all access record data from S3 to the Hive cluster.  Once the data is loaded on the cluster, ad hoc and canned queries can be executed to generate reports or discover new trends.  The following outlines how to get started with the analysis of the Synapse audit data.

Launch an EMR Hive Cluster

Launch a new EMR Hive cluster (see Launch a Hive Cluster for more information).

...

Choose the number and types of the EC2 instances for the cluster.  There must be at least one "Master Instance Group" instance and two "Core Instance Group" instances.  Request Spot Instances.

Set the Amazon EC2 Key Pair to "prod-key-pair".  You will then need to use the prod-key-pair to SSH into the cluster.

...

Once the cluster is launched and active, you can find the "Master Public DNS Name" needed to SSH into the cluster:

SSH into Hive Cluster

Once the cluster is active, SSH into the master node using the prod-key-pair (see: AccessingInstancesLinux for more information).  Note: the user name must be "hadoop":

...

Code Block
hadoop@ip-10-28-72-37:~$ hive

Creating Hive Tables

Once you have an interactive Hive session with the master node of the cluster, you are ready to set up the tables that will be used for analysis.  First we must create the external table for the access record CSV data in S3:

...
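The actual DDL is elided above.  As a rough sketch only, an external table over the S3 data and a matching local table might look something like the following; the column names are taken from the insert statement below, but the types, delimiter, partitioning, and S3 location shown here are assumptions and the real statements may differ.

```sql
-- Sketch only: column names come from the insert statement below;
-- types, delimiter, partition column, and location are assumptions.
create external table access_record_s3 (
  returnObjectId string, elapseMS bigint, timestamp bigint, via string,
  host string, threadId bigint, userAgent string, queryString string,
  sessionId string, xForwardedFor string, requestURL string, userId bigint,
  origin string, date string, method string, vmId string,
  instance string, stack string, success string
)
partitioned by (datep string)
row format delimited fields terminated by ','
location 's3://prod.access.record.sagebase.org/';

-- A local table with the same columns to hold the copied data.
create table access_record_local (
  returnObjectId string, elapseMS bigint, timestamp bigint, via string,
  host string, threadId bigint, userAgent string, queryString string,
  sessionId string, xForwardedFor string, requestURL string, userId bigint,
  origin string, date string, method string, vmId string,
  instance string, stack string, success string
);
```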

Code Block
insert overwrite table access_record_local select 
returnObjectId,
elapseMS,
timestamp,
via,
host,
threadId,
userAgent,
queryString,
sessionId,
xForwardedFor,
requestURL,
userId,
origin,
date,
method,
vmId,
instance,
stack,
success from access_record_s3 where datep > '2013-09-14';

The above statement produced the following:

Code Block
MapReduce Total cumulative CPU time: 59 seconds 600 msec
Ended Job = job_201309172347_0005
Counters:
Loading data to table default.access_record_local
Deleted hdfs://10.28.72.37:9000/mnt/hive_0110/warehouse/access_record_local
Table default.access_record_local stats: [num_partitions: 0, num_files: 3, num_rows: 0, total_size: 521583756, raw_data_size: 0]
1833436 Rows loaded to access_record_local
MapReduce Jobs Launched:
Job 0: Map: 72   Cumulative CPU: 219.31 sec   HDFS Read: 19440 HDFS Write: 521583756 SUCCESS
Job 1: Map: 3   Cumulative CPU: 59.6 sec   HDFS Read: 521594463 HDFS Write: 521583756 SUCCESS
Total MapReduce CPU Time Spent: 4 minutes 38 seconds 910 msec
OK
Time taken: 329.334 seconds
hive>

Running a similar count query against the local table:

Code Block
select count(*) from access_record_local;

Produced:

Code Block
MapReduce Total cumulative CPU time: 24 seconds 180 msec
Ended Job = job_201309172347_0006
Counters:
MapReduce Jobs Launched:
Job 0: Map: 3  Reduce: 1   Cumulative CPU: 24.18 sec   HDFS Read: 521592826 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 24 seconds 180 msec
OK
1833436
Time taken: 65.007 seconds, Fetched: 1 row(s)
hive>

For this example, the count query against the local table ran twice as fast as the same query against the external table (65 secs vs 175 secs).

Running Analysis

Once the tables are set up and populated with data, the real analysis can start.  Here is an example query used to find the distinct "userAgent" strings used to make calls and the counts for each:

Code Block
select count(userAgent), userAgent from access_record_local group by userAgent;

Here are the results from the above query:

Code Block
MapReduce Total cumulative CPU time: 31 seconds 120 msec
Ended Job = job_201309172347_0007
Counters:
MapReduce Jobs Launched:
Job 0: Map: 3  Reduce: 1   Cumulative CPU: 31.12 sec   HDFS Read: 521592826 HDFS Write: 2563 SUCCESS
Total MapReduce CPU Time Spent: 31 seconds 120 msec
OK
47319
8527    "Jakarta Commons-HttpClient/3.1"
285949  "Synpase-Java-Client/12.0-2-ge17e722  Synapse-Web-Client/12.0-4-gafe76ad "
6615    "Synpase-Java-Client/12.0-2-ge17e722"
6767    "Synpase-Java-Client/13.0"
508     "Synpase-Java-Client/develop-SNAPSHOT  JayClearingPreviews"
1498    "Synpase-Java-Client/develop-SNAPSHOT  Synapse-Web-Client/develop-SNAPSHOT"
145     "Synpase-Java-Client/develop-SNAPSHOT"
1       "python-requests/1.2.0 CPython/2.7.2 Darwin/11.4.2"
1190    "python-requests/1.2.0 CPython/2.7.4 Darwin/11.4.2"
76      "python-requests/1.2.3 CPython/2.7.2 Darwin/12.4.0"
9       "python-requests/1.2.3 CPython/2.7.3 Linux/3.2.0-36-virtual"
16      "python-requests/1.2.3 CPython/2.7.3 Linux/3.5.0-34-generic"
176     "python-requests/1.2.3 CPython/2.7.3 Linux/3.9.10-100.fc17.x86_64"
32      "python-requests/1.2.3 CPython/2.7.4 Darwin/12.4.0"
1223263 "python-requests/1.2.3 CPython/2.7.4 Linux/3.8.0-19-generic"
96      "python-requests/1.2.3 CPython/2.7.4 Linux/3.8.0-26-generic"
8464    "python-requests/1.2.3 CPython/2.7.5 Linux/3.10.10-200.fc19.x86_64"
2       "python-requests/1.2.3 CPython/2.7.5 Windows/7"
4       "synapseRClient/0.26"
494     "synapseRClient/0.27"
691     "synapseRClient/0.28"
126     "synapseRClient/0.29-1"
238765  "synapseRClient/0.30-1"
1458    "synapseRClient/0.31-1"
2       "synapseRClient/0.31-2"
1131    "synapseRClient/0.31-3"
Time taken: 80.385 seconds, Fetched: 43 row(s)
hive>
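Other reports follow the same pattern.  For instance, here is a sketch of a query that lists the most frequently requested URLs; the column names come from the table above, while the alias and limit value are arbitrary choices for illustration.

```sql
-- Count calls per request URL and show the most frequently hit endpoints.
select requestURL, count(*) as calls
from access_record_local
group by requestURL
order by calls desc
limit 20;
```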

Cleanup

Do not forget to terminate the cluster you created when you are finished with it.
