I am running an API server on Amazon EC2 powered by PHP, and I am trying to fine-tune performance with the goal of reducing response times. Currently I test the performance of the PHP code internally by recording the time in milliseconds at the start of a request, then again at the end of the request just before the script exits. After implementing memcache and a few other tricks, most requests report sub-10-20ms execution time within my PHP code. However, when I look at the network requests in Chrome's Developer Tools, requests that hit my PHP scripts still show up as:
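The internal timing described above can be sketched roughly like this (the `X-App-Time` header name is just an illustrative choice, not something from my actual code):

```php
<?php
// Record wall-clock time at the very start of the request.
$start = microtime(true);

// ... handle the request: routing, memcache lookups, DB queries ...

// Elapsed time in milliseconds, measured just before the script ends.
$elapsedMs = (microtime(true) - $start) * 1000;

// Expose it in a response header so it can be compared against
// the "Waiting" time Chrome reports for the same request.
header('X-App-Time: ' . round($elapsedMs, 2) . 'ms');
```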
Sending: 0ms | Waiting: 145ms | Receiving: 1ms

This compares with requests for static files, which are usually more along the lines of:

Sending: 0ms | Waiting: 15ms | Receiving: 1ms

So all I know is that when I measure execution time within the script itself, it only takes 10-20ms to run. At this point I've taken care of the low-hanging fruit of making my PHP code fast, and I'm trying to figure out how to speed up the rest of the flow outside the execution of my script.
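For what it's worth, one way I can reproduce Chrome's breakdown from the command line is curl's `-w` timing variables (the URL below is a placeholder for one of my endpoints):

```shell
# Break a single request into phases. Chrome's "Waiting" roughly
# corresponds to time_starttransfer - time_connect (time to first byte
# after the connection is established).
curl -s -o /dev/null \
  -w 'namelookup:    %{time_namelookup}s\nconnect:       %{time_connect}s\nstarttransfer: %{time_starttransfer}s\ntotal:         %{time_total}s\n' \
  http://example.com/api/endpoint.php
```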
So my question is: how do I determine where the extra 120ms comes from, given that it is outside the flow of my own code? Is it time spent by Apache setting up a PHP process? Is it time spent by the PHP interpreter parsing the PHP files? Is it time spent by the PHP process delivering the response from my script back to Apache? How do I figure out how that 120ms of extra time is distributed, with the aim of reducing it?
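One thing I've considered, to at least separate Apache's total handling time from my in-script timing, is logging `%D` via mod_log_config (this is a sketch for a stock Apache setup, not my exact config):

```apache
# %D logs the time Apache took to serve the request, in microseconds,
# including PHP interpreter startup, script execution, and response
# delivery. Comparing it against the in-script timing would isolate
# the overhead outside my own code.
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog "logs/access_timed.log" timed
```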