What auditing/logging should we put in place to be able to "sleuth" problems that users report in Synapse?
1) Add a filter to the repo service to record: timestamp, URI, headers, request body (need to consider its size), source of the request (web client, R client, cURL), plus a unique ID for the request that spans the whole transaction. (Note: the Java and R clients need to set the user-agent field to identify themselves. Something like 'user agent' could be a required parameter of the Java client methods. The identity should include the stack.) See the filter sketch after this list.
2) At the database/DAO level: record the SQL, user, timestamp, and transaction ID. See the DAO-level sketch after this list.
(note: MySQL supports 'trigger-based' database logging; the downside is the additional load it places on the database)
We might not need auditing on all tables, just the primary ones.
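Below is a minimal sketch of what the request filter in 1) could look like, assuming the repo service runs on a standard servlet stack; the 'audit' logger name is an assumption, and only a subset of the fields (timestamp, method, URI, user agent, request ID) is shown. Headers and the request body would need the size consideration noted above.

    import java.io.IOException;
    import java.util.UUID;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;
    import org.apache.log4j.Logger;

    // Hypothetical audit filter: writes one record per request and generates the
    // unique request ID that can be carried down to the DAO layer as the transaction ID.
    public class AuditFilter implements Filter {
        private static final Logger AUDIT_LOG = Logger.getLogger("audit");

        @Override
        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest httpReq = (HttpServletRequest) req;
            String requestId = UUID.randomUUID().toString();     // unique ID for the whole transaction
            String userAgent = httpReq.getHeader("User-Agent");  // web client, R client, Java client, cURL, ...
            long timestamp = System.currentTimeMillis();
            try {
                chain.doFilter(req, resp);
            } finally {
                // timestamp, request ID, method, URI, source of the request
                AUDIT_LOG.info(timestamp + "\t" + requestId + "\t" + httpReq.getMethod()
                        + "\t" + httpReq.getRequestURI() + "\t" + userAgent);
            }
        }

        @Override public void init(FilterConfig config) {}
        @Override public void destroy() {}
    }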
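And a sketch of the DAO-level recording in 2); the SqlAuditLogger class and the 'audit.sql' logger name are made up for illustration. Each DAO would call it with the same request/transaction ID generated by the filter, e.g. SqlAuditLogger.record(requestId, userId, sqlBeingExecuted).

    import org.apache.log4j.Logger;

    // Hypothetical helper the DAOs call around each statement they execute.
    public class SqlAuditLogger {
        private static final Logger AUDIT_LOG = Logger.getLogger("audit.sql");

        public static void record(String transactionId, String userId, String sql) {
            long timestamp = System.currentTimeMillis();
            // SQL, user, timestamp, transaction ID (tab-delimited so downstream processors can split it)
            AUDIT_LOG.info(timestamp + "\t" + transactionId + "\t" + userId + "\t" + sql);
        }
    }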
Where do we store this data? Ideas: (1) log file, (2) DynamoDB
note: DynamoDB has the ability to archive content to S3
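If DynamoDB is chosen, writing a processed audit record could look roughly like the sketch below (AWS SDK for Java); the table name "AuditRecord" and its keys (hash key "entityId", range key "timestamp") are assumptions, not an existing schema.

    import java.util.HashMap;
    import java.util.Map;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;

    // Hypothetical writer that pushes one audit record into a DynamoDB table.
    public class DynamoAuditWriter {
        private final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

        public void write(String entityId, long timestamp, String requestId, String userId, String uri) {
            Map<String, AttributeValue> item = new HashMap<>();
            item.put("entityId", new AttributeValue(entityId));                          // hash key
            item.put("timestamp", new AttributeValue().withN(Long.toString(timestamp))); // range key
            item.put("requestId", new AttributeValue(requestId));
            item.put("userId", new AttributeValue(userId));
            item.put("uri", new AttributeValue(uri));
            dynamo.putItem("AuditRecord", item);
        }
    }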
We might need tools that digest the captured information to answer common questions like "When did this object change?"
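For example, a "When did this object change?" tool could query the (assumed) AuditRecord table by entity ID and read the records back in timestamp order:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.QueryRequest;

    // Hypothetical digest tool: lists every audit record for one entity,
    // ordered by the "timestamp" range key (oldest change first).
    public class ChangeHistoryTool {
        private final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

        public List<Map<String, AttributeValue>> changesFor(String entityId) {
            Map<String, AttributeValue> values = new HashMap<>();
            values.put(":id", new AttributeValue(entityId));
            QueryRequest query = new QueryRequest()
                    .withTableName("AuditRecord")
                    .withKeyConditionExpression("entityId = :id")
                    .withExpressionAttributeValues(values)
                    .withScanIndexForward(true);
            return dynamo.query(query).getItems();
        }
    }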
What information should be captured and what is the mechanism?
Info should be captured to logs (local disk files) that are then processed and moved to other system(s) (e.g. DynamoDB) for indexing.
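A sketch of that processing step, assuming each log line holds one JSON audit record (the record format and the AuditIndexer interface are assumptions for illustration); a DynamoDB writer like the one sketched above would be one indexer implementation.

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.Map;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // Hypothetical offline processor: reads a rolled audit log from local disk
    // and hands each record to whatever system indexes it.
    public class AuditLogProcessor {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        // Hypothetical indexing hook; a DynamoDB-backed implementation is one option.
        public interface AuditIndexer {
            void index(Map<String, Object> record);
        }

        public void process(File rolledLogFile, AuditIndexer indexer) throws IOException {
            try (BufferedReader reader = new BufferedReader(new FileReader(rolledLogFile))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // each line is one JSON record written by the request filter
                    @SuppressWarnings("unchecked")
                    Map<String, Object> record = MAPPER.readValue(line, Map.class);
                    indexer.index(record);
                }
            }
        }
    }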