Use connection pooling for Python client's requests
I was profiling the Python client with cProfile to help diagnose the slowness and discovered that the client was spending about 1-2 seconds per request performing an SSL handshake.
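As a rough sketch of how a profile like that can be captured (the slow_api_call stand-in below is hypothetical; in practice you would profile a real Synapse client call, where the SSL handshake time shows up in the ssl module's frames):

```python
import cProfile
import io
import pstats

def slow_api_call():
    # Hypothetical stand-in for a Synapse request; profiling a real call
    # would surface where the per-request time actually goes.
    return sum(range(100_000))

profiler = cProfile.Profile()
profiler.enable()
slow_api_call()
profiler.disable()

# Print the five most expensive entries by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```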
While talking with , he suggested that I configure the requests library to use connection pooling when contacting the Synapse backend. This change allows the Python client to reuse the same connection for successive requests, reducing the amount of time each API call takes once the initial SSL handshake is complete.
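A minimal sketch of what connection pooling with requests looks like (the pool sizes here are illustrative assumptions, not necessarily what the Synapse client uses):

```python
import requests
from requests.adapters import HTTPAdapter

# A Session keeps TCP/SSL connections alive between requests, so repeated
# calls to the same host skip the per-request handshake.
session = requests.Session()

# Illustrative pool sizes; requests' defaults are usually fine.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=10)
session.mount("https://", adapter)

# All HTTPS requests made through `session` now share the pooled connections,
# e.g.: response = session.get("https://example.org/some/endpoint")
```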
That is a substantial improvement – , thank you for validating. Exciting!
I see an improvement running this version versus the current version on PyPI (1.7.5) when making successive calls to Synapse: a 'syn.get' goes from 0.5s to 0.25s, and a 'syn.tableQuery' goes from 0.9s to 0.4s.
Would you be willing to confirm this improvement?
Compare how long it takes to make multiple calls to Synapse. The develop branch should be faster than the current release. I like to use cProfile.run() for timing. If your script creates asynchronous jobs in Synapse (e.g. querying tables), results will be cached, so make sure you do the time comparison AFTER already calling it once, so that during the comparison both the develop branch and the current release are getting cached results.
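A sketch of that warm-up-then-time pattern, using a hypothetical stand-in for a Synapse table query so the caching point is concrete (with the real client you would profile e.g. syn.tableQuery the same way; lru_cache here only mimics the server-side caching of async-job results):

```python
import cProfile
import functools
import io
import pstats

# Hypothetical stand-in for an asynchronous Synapse job such as a table query.
@functools.lru_cache(maxsize=None)
def table_query(sql):
    return sum(range(200_000))  # simulated work

# Warm-up call: populates the cache so the timed run below is comparable
# between the develop branch and the current release.
table_query("SELECT * FROM syn123")

# Timed run, which hits the cached result.
profiler = cProfile.Profile()
result = profiler.runcall(table_query, "SELECT * FROM syn123")

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
print(stream.getvalue())
```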