I'd like to preface this by saying I have read at least 10 related Serverfault questions before resorting to making my own...
I am currently running an Ubuntu 14.04.3 server with 2GB of RAM and about 5 active WordPress installations, all managed under the Vesta CP control panel.
Normally it uses about 700MB of the 2GB, but every week or so all of the RAM gets magically consumed and the server slows almost to a halt.
If I SSH into it, restart Apache, and drop the caches (echo 3 > /proc/sys/vm/drop_caches), it starts functioning just fine again.
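For reference, this is the sequence I run (as root) to bring it back — nothing exotic, just the stock Ubuntu 14.04 service script plus the cache drop:

    # Bring the server back once it has ground to a halt
    service apache2 restart              # respawn the Apache children
    sync                                 # flush dirty pages before dropping caches
    echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes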
Here are my prefork module settings, which I feel are very reasonable:

    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       1
        MaxSpareServers       5
        ServerLimit           10
        MaxClients            10
        MaxRequestsPerChild   1000
    </IfModule>

I even enabled mod_status and tried to see which PHP files were taking too long, but didn't find anything suspicious. Of course, when I look at it while the server is down, it's flooded with at least 200 entries for PHP files that can't finish running because of the massive memory consumption.
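In case the setup matters, this is roughly how mod_status is enabled and how I poll it (stock Debian/Ubuntu layout; the Require local line assumes Apache 2.4, which 14.04 ships):

    # Enable mod_status and reload (Debian/Ubuntu)
    a2enmod status
    service apache2 reload

    # The relevant config block (the stock example, nothing custom):
    #   ExtendedStatus On
    #   <Location /server-status>
    #       SetHandler server-status
    #       Require local
    #   </Location>

    # Machine-readable snapshot of the scoreboard
    curl -s "http://localhost/server-status?auto"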
I even enabled an 8GB swap file, but that seems to have only delayed the inevitable.
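For reference, the swap file was added the standard way (from memory, so treat this as approximate):

    # Create and enable an 8GB swap file
    fallocate -l 8G /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # plus an /etc/fstab entry so it survives reboots:
    # /swapfile none swap sw 0 0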
Here's what the free -m command shows every time this happens:
    root@apache2-ps7881:/home/dhc-user# free -m
                 total       used       free     shared    buffers     cached
    Mem:          2001       1943         57         35          1         59
    -/+ buffers/cache:       1883        118
    Swap:         8191       4083       4108

After restarting apache:
    root@apache2-ps7881:/etc/apache2# free -m
                 total       used       free     shared    buffers     cached
    Mem:          2001        744       1257         65         36        204
    -/+ buffers/cache:        503       1498
    Swap:         8191        140       8051

Here's the /var/log/apache2/error.log:
    [Fri Feb 12 08:22:33.063204 2016] [mpm_prefork:error] [pid 2081] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
    [Fri Feb 12 13:12:59.819680 2016] [core:warn] [pid 2081] AH00045: child process 6334 still did not exit, sending a SIGTERM

That "child process still did not exit" warning goes on for hundreds more lines.
I get the "server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting" message every time it goes down.
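That makes me wonder whether each child is simply bigger than I budgeted for. My sizing rule of thumb was MaxClients ≈ RAM to spare for Apache ÷ size of one child — I assumed roughly 150MB per WordPress child, so 10 workers in about 1.5GB. Here's the rough measurement I've been using to check the actual child size (RSS double-counts shared pages, so treat it as an upper bound):

    # Sum and average the resident memory of the Apache children
    # (column 8 of "ps -ly" output is RSS in KB)
    ps -ylC apache2 | awk 'NR>1 {sum+=$8; n++} END {printf "children=%d  total=%.0f MB  avg=%.0f MB\n", n, sum/1024, sum/1024/n}'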
Another error.log reveals the following:
    [Fri Feb 12 08:19:55.781598 2016] [:error] [pid 20686] [client 10.10.10.9:54559] PHP Warning:  mysqli_connect(): (08004/1040): Too many connections in /[censored]/$
    [Fri Feb 12 08:19:55.896491 2016] [:error] [pid 20686] [client 10.10.10.9:54559] Too many connections, referer: http://[censored]

Could it be that a connection isn't being closed somewhere? But would that cause a memory leak?
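To see whether connections really are piling up, I plan to compare MySQL's configured cap with what's actually open (standard status variables; nothing here is WordPress-specific):

    # Configured connection cap vs. current and peak usage
    mysql -e "SHOW VARIABLES LIKE 'max_connections';"
    mysql -e "SHOW STATUS WHERE Variable_name IN ('Threads_connected','Max_used_connections');"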
Here's an example of what the graphs reveal during the crashes:

[graph images attached here]
As for top output, I'll have to wait until it happens again to capture it.
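When it does, I'll grab a non-interactive snapshot so the output doesn't get lost (batch-mode top; the -o sort flag assumes the procps version that 14.04 ships, which supports it):

    # One-shot capture of the biggest memory consumers
    top -b -n 1 -o %MEM | head -n 25 > /root/top-during-crash.txt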