
I am using a VPS running CentOS 6 with 3 GB of memory. After a reboot I have 4 or 5 httpd processes running, each using about 2.5% of memory (86m in the RES column of top).

I am running just one website, which is not live yet, so I am the only one connecting to it.

However, every day the httpd memory percentage goes up by 0.3 or 0.4, so after 4 or 5 days those httpd processes are each using about 4% of memory (130m in the RES column of top). I do not see any errors in the logs and everything works correctly, but if I left the server without rebooting for 2 weeks I would run out of memory.

One way to reproduce it is with the ab command. For instance, if I run:

ab -c 2000 -t 60 http://xxx.xxx.xxx.xxx/ 

After running it, each of the httpd processes is using about 0.3 or 0.4% more memory than before the test.
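
A minimal way to quantify that growth, assuming the Apache processes are named httpd (as they are on stock CentOS):

# Total resident memory of all httpd processes, in kB;
# run once before the ab test and once after, then compare.
ps -C httpd -o rss= | awk '{sum += $1} END {printf "httpd total RSS: %d kB\n", sum}'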

Again, I do not see any errors in the logs.

Is this normal?


I have been doing more testing and research. My values:

KeepAlive Off

<IfModule prefork.c>
    StartServers 1
    MinSpareServers 1
    MaxSpareServers 5
    ServerLimit 15
    MaxClients 15
    MaxRequestsPerChild 2000
</IfModule>

This seems to be OK, and I always have about 500 MB of memory to spare (at least just after a reboot). The issue is that the five httpd processes which are always alive keep growing, so when traffic hits the server and more child processes are created, they start at the size of the parent httpd process. If the parent httpd process is 120 MB, all the child processes will be 120 MB. So it does not matter how small MaxRequestsPerChild is, because each new child process takes as much memory as the previous one. Any advice?

  • Could it be that your code leaks memory like a sieve? Commented Jun 5, 2013 at 14:57
  • You say '...I will run out of memory'. Does your server actually run out of memory or is this just an extrapolation? Commented Jun 5, 2013 at 14:59
  • I am not sure if I would actually run out of memory, because I always reboot after 5 or 6 days. When I reboot, the total memory in use is about 1400 MB, but after 4 or 5 days it is 1800 MB, so I reboot. Again, I do not see stuck processes or anything; all the memory increase can be seen in those 4 or 5 httpd processes. Commented Jun 5, 2013 at 15:24
  • Not an answer but I'm working this same kinda thing over at stackoverflow: stackoverflow.com/questions/16987900/… Nothing good to report yet... httpd still growing in mem size and using more and more swap Commented Jun 8, 2013 at 13:43

3 Answers


You don't actually say what web server software you are using. If it is Apache, though (and the multi-process model makes that seem likely), then you should look at the MaxRequestsPerChild directive.

If, for example, you're running PHP, Ruby or Perl apps that (like most) are not especially careful about memory leaks, then you should probably knock MaxRequestsPerChild down to around 40 or so. What counts as a good value varies, though: some application stacks have much more cost associated with restarting processes than others, and some leak far more memory than others. I've set MaxRequestsPerChild anywhere from 5 to 1000 in different circumstances, but it's generally best to start low and raise it by degrees while it feels safe to do so.

You should expect some increase in memory use after start-up under normal circumstances, which levels off after a while.

If you did leave your server unattended, and it ran out of memory, then it would likely start using swap, and get horribly slow. Because requests aren't being dealt with quickly, more work would pile up, and it would tend to consume more memory unless limits on numbers of processes prevent that. You want to think a bit about the limits on the numbers of processes, and how much memory you think your server would start using under such circumstances.
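
As a rough way to size those limits (a sketch, not gospel; it assumes the processes are named httpd and uses the 3 GB / 500 MB headroom figures from the question):

# Average resident size per httpd child, in MB
ps -C httpd -o rss= | awk '{sum += $1; n++} END {printf "%.0f MB avg per child\n", sum / n / 1024}'

# Then cap the process count so the worst case still fits in RAM:
#   MaxClients <= (total RAM - headroom for the OS and other services) / per-child RSS
#   e.g. (3072 - 500) / 120 = about 21 processes on this 3 GB VPS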

You also don't want to have too much swap. If you have a lot of swap, your server will be more or less entirely unresponsive while it slowly consumes its swap memory. Either you'll intervene with a reboot (you're unlikely to get a shell to work), or you'll use all your swap up and the OOM killer will start killing processes. If it comes to this, you'd actually rather the OOM Killer kicked in sooner. Excess swap just makes the downtime longer. The common recommendation to have twice as much swap as RAM is completely inappropriate for most web servers.
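
To keep an eye on memory and swap before it reaches that point, the standard tools are enough (nothing Apache-specific here):

free -m       # RAM and swap in use, in MB
swapon -s     # per-device swap usage
vmstat 5      # sustained non-zero si/so columns mean the box is actively swapping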

Raise your MinSpareServers and MaxSpareServers. I'd put the maximum up to 15 or so; what's the point of killing processes off below that? The minimum should be at least 5.
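
Putting those suggestions together, a prefork block along these lines could be a starting point; the numbers are illustrative, not a definitive recommendation, and should be tuned against your own measurements:

<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      15
    ServerLimit          15
    MaxClients           15
    # Recycle children aggressively so leaked memory is returned to the OS;
    # raise this gradually once memory use proves stable.
    MaxRequestsPerChild  40
</IfModule>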

  • Yes, I am using Apache. These are some of the default values from my httpd.conf:

        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 60
        KeepAlive Off
        MaxKeepAliveRequests 100
        KeepAliveTimeout 15
        <IfModule prefork.c>
            StartServers 1
            MinSpareServers 1
            MaxSpareServers 5
            ServerLimit 50
            MaxClients 50
            MaxRequestsPerChild 4000
        </IfModule>
        <IfModule worker.c>
            StartServers 1
            MaxClients 50
            MinSpareThreads 1
            MaxSpareThreads 4
            ThreadsPerChild 25
            MaxRequestsPerChild 0
        </IfModule>
        Listen 80

    Commented Jun 5, 2013 at 18:01
  • I am using Server version: Apache/2.2.15 (Unix) Commented Jun 5, 2013 at 18:05
  • httpd -l shows: Compiled in modules: core.c prefork.c http_core.c mod_so.c Commented Jun 5, 2013 at 18:13

You may have a memory leak: if the workers keep growing every time you run your ab test, you probably do. (A little creep in memory usage beyond what you have when you first start the server is normal; continuous, predictable growth isn't.)
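
One way to tell warm-up from a leak is to log the workers' total memory over time and see whether it plateaus (a minimal sketch, assuming the processes are named httpd):

# Record a timestamped RSS total every 10 minutes; warm-up levels off,
# a leak grows without bound.
while true; do
    echo "$(date '+%F %T') $(ps -C httpd -o rss= | awk '{s += $1} END {print s}') kB"
    sleep 600
done >> /tmp/httpd-rss.log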

If your problem is a memory leak, it is probably NOT the web server's fault, but rather your code (PHP or whatever scripting language you use, or some badly written library you rely on).

The way to fix a memory leak is to analyze your code (how to do that would be a Stack Overflow question), find the memory leak, and fix it (or get the person responsible for the leaky code to fix it).


If the code is opaque (or you just don't have the time), mc0e's answer provides a viable workaround: make sure you're using an appropriate MPM (on Unix that means prefork or worker) and set the MaxRequestsPerChild directive to a value low enough that your workers are recycled before you run out of RAM. This directive has performance implications, which are detailed in the documentation.

  • Thanks. I have just done a quick test. I launched a new site with just WordPress and a default WordPress theme, then hit it with ab -c 1000 -t 100 xxxxxxxx.com, and I get an increase of about 40 MB every time I run it. Would this be a good test to prove that the issue is not in my site's PHP code and lies somewhere else? Thanks Commented Jun 5, 2013 at 23:49

Finally I found out what the issue was. There was no leak in my code and no misconfiguration; the system was behaving as intended. The issue was the web application firewall, which updates very often, and every time it does it caches its rules again, using a lot of httpd memory.

Thanks, everybody.
