...

1) Add a filter to the repo service to record: timestamp, URI, headers, possibly the request body (its size needs to be considered), source of the request (web client, R client, cURL), and a unique ID for the request (spanning the whole transaction). (Note: the Java and R clients need to set the user agent field to identify themselves. Something like 'user agent' could be a required parameter of the Java client methods. The identity should include the stack.) A sketch of such a filter follows this list.

2) At the database/DAO level: record the SQL, user, timestamp, and transaction ID.
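
A minimal sketch of the filter in (1), assuming the standard Servlet API and SLF4J; the class name, the logged fields, and the MDC-based propagation of the request ID down to the DAO layer in (2) are illustrative assumptions, not the actual PLFM code.

```java
import java.io.IOException;
import java.util.Enumeration;
import java.util.UUID;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class AccessRecordFilter implements Filter {

    private static final Logger log = LoggerFactory.getLogger(AccessRecordFilter.class);

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;

        // One ID per request; putting it in the MDC lets the DAO layer tag its
        // SQL log entries with the same transaction ID.
        String requestId = UUID.randomUUID().toString();
        MDC.put("requestId", requestId);
        try {
            // Collect headers into a single loggable string.
            StringBuilder headers = new StringBuilder();
            Enumeration<?> names = httpRequest.getHeaderNames();
            while (names.hasMoreElements()) {
                String name = (String) names.nextElement();
                headers.append(name).append('=').append(httpRequest.getHeader(name)).append(' ');
            }
            // The User-Agent header identifies the source (web client, R client, cURL, ...).
            log.info("requestId={} timestamp={} method={} uri={} userAgent={} headers=[{}]",
                    new Object[] { requestId, System.currentTimeMillis(),
                            httpRequest.getMethod(), httpRequest.getRequestURI(),
                            httpRequest.getHeader("User-Agent"), headers.toString().trim() });
            // The request body is not logged here; capturing it (and capping its size)
            // would require wrapping the request, per the size concern noted in (1).
            chain.doFilter(request, response);
        } finally {
            MDC.remove("requestId");
        }
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {}

    @Override
    public void destroy() {}
}
```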

...

Where do we store this data? Ideas: (1) log file, (2) DynamoDB.

Note: DynamoDB has the ability to archive content to S3.

We might need tools that digest the captured information to answer common questions (e.g. "When did this object change?").

Note: It might be sufficient just to capture the web request, without capturing the database-level activity.

What information should be captured and what is the mechanism? 

Information should be captured to logs (local disk files), which are then processed and moved to other system(s) (e.g. DynamoDB) for indexing.
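
As a rough illustration of the indexing step, the sketch below uses the AWS SDK for Java to write one parsed log entry per DynamoDB item; the AccessRecord table name, its attributes, and the entityId/timestamp key scheme are assumptions, chosen so that "When did this object change?" becomes a simple range query.

```java
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

public class AccessRecordIndexer {

    private final AmazonDynamoDBClient dynamo = new AmazonDynamoDBClient();

    /**
     * Index one parsed log entry. With entityId as the hash key and timestamp as
     * the range key, the change history of an object is a single key-range query.
     */
    public void index(String entityId, long timestamp, String requestId,
                      String method, String uri) {
        Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
        item.put("entityId", new AttributeValue().withS(entityId));
        item.put("timestamp", new AttributeValue().withN(Long.toString(timestamp)));
        item.put("requestId", new AttributeValue().withS(requestId));
        item.put("method", new AttributeValue().withS(method));
        item.put("uri", new AttributeValue().withS(uri));
        dynamo.putItem(new PutItemRequest().withTableName("AccessRecord").withItem(item));
    }
}
```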

 

Per Eric: We could capture each request in an S3 file (one file per request), putting the timestamp, entity ID, or other key information in the file name. Use MapReduce to process the files efficiently to answer queries.
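
A sketch of the one-file-per-request idea, assuming the AWS SDK for Java; the bucket name and the timestamp-entityId-requestId key layout are hypothetical.

```java
import java.io.ByteArrayInputStream;
import java.io.UnsupportedEncodingException;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class RequestArchiver {

    // Hypothetical bucket for captured requests.
    private static final String BUCKET = "repo-access-records";

    private final AmazonS3 s3 = new AmazonS3Client();

    /**
     * Write one S3 object per request. Key information goes into the object key
     * so MapReduce jobs can select files by prefix without reading their contents.
     */
    public void archive(long timestamp, String entityId, String requestId, String requestJson)
            throws UnsupportedEncodingException {
        String key = String.format("%d-%s-%s.json", timestamp, entityId, requestId);
        byte[] bytes = requestJson.getBytes("UTF-8");
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(bytes.length);
        metadata.setContentType("application/json");
        s3.putObject(BUCKET, key, new ByteArrayInputStream(bytes), metadata);
    }
}
```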

John adds: The S3 files created during migration could be retained as a "log" of migrator activity. (Bruce says the files would be used with the migration requests to create the actual history.)

While PLFM logs incoming requests, Portal could log the requests it thinks it made.

 

Next Steps:

1) Capture individual requests (and responses) to the repo-svcs as S3 files.

2) Capture Catalina logs (e.g. to get stack traces associated with 500 status codes).