18

I have a long-running process on Debian. At some point it throws an error:

Too many open files.

Running:

ulimit -a

shows:

open files (-n) 1024

I want to double the number of open files. After running

ulimit -n 2048

the limit is only active until the end of my session, which does not suit my task.

How can I permanently increase the number of open files?


7 Answers

15

If your process is started via a script, you can place the call to ulimit in the script just prior to executing the daemon.

If you wish to increase the ulimit for your user, or for all users, you can set limits that are applied via pam_limits on login. These are set in /etc/security/limits.conf. In your case, you could do something like:

* hard nofile 2048 

Note that "hard" denotes a hard limit - one that cannot be exceeded, and cannot be altered. A soft limit can be altered by a user (e.g. someone without root capabilities), but not beyond the hard limit.

Read the limits.conf(5) man page for more information on using pam_limits.
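For example, to set both a soft and a hard limit for a single user rather than for everyone (myuser is a placeholder name; the values are illustrative), the limits.conf entries would look like this:

```
myuser soft nofile 2048
myuser hard nofile 4096
```

The soft limit is what the user's session starts with; the user can raise it, up to the hard limit, with ulimit -n.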

6
  • In limits.conf I have 2 lines: * soft nofile 4096 and * hard nofile 8192. They have no effect. Commented Jun 5, 2009 at 10:15
  • And you've logged out and in again since setting these? That will mean logging right out of X / GNOME / KDE etc. if you're trying this on a local machine. Commented Jun 5, 2009 at 10:36
  • Yes. /etc/security/limits.conf doesn't work for me. I'll try second approach. Commented Jun 5, 2009 at 10:53
  • 3
    /etc/security/limits.conf works only for services that use pam and the pam module pam_limits (see /etc/pam.d/ for the PAM config of each service and /etc/pam.d/common-* in particular). It thus concerns all user-sessions created by sshd, gdm, login, etc. It doesn't concern all programs started at boot-time... Commented Jun 6, 2009 at 12:08
  • I did say something to that effect, but thanks for clarifying it. The OP hasn't clarified if it's a service or a process his user is running. Commented Jun 7, 2009 at 4:02
15

There is also a "total max" of open files set in the kernel; you can check the current setting with:

cat /proc/sys/fs/file-max 

And set a new value with:

echo "104854" > /proc/sys/fs/file-max 

If you want to keep the config between reboots add

fs.file-max = 104854 

to

/etc/sysctl.conf 

To check current max file usage:

[root@srv-4 proc]# cat /proc/sys/fs/file-nr
3391    969     52427
 |       |       |
 |       |       maximum open file descriptors
 |       total free allocated file descriptors
 total allocated file descriptors (the number of file descriptors allocated since boot)
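The three fields of file-nr can also be read into separate variables from a script; a minimal sketch (the variable names are mine):

```shell
# /proc/sys/fs/file-nr holds three numbers:
#   allocated descriptors, free allocated descriptors, system-wide maximum
read allocated free maximum < /proc/sys/fs/file-nr
echo "allocated=$allocated free=$free max=$maximum"
```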
2
5

Be aware that if you run your process via start-stop-daemon, setting ulimits in /etc/security/limits.conf doesn't work. If, for example, you want to raise the open file limit for Tomcat to 20000, you need to add these two lines to /etc/default/tomcat:

ulimit -Hn 32768
ulimit -Sn 32768

I encountered this problem on Debian 6.0.4. For other processes, the other answers given should help.
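To verify which limit a daemon actually ended up with, whatever mechanism started it, you can inspect its /proc entry; a sketch that uses the shell's own PID ($$) as a stand-in:

```shell
# Substitute the daemon's real PID for $$ to check a service such as Tomcat.
grep "Max open files" /proc/$$/limits
```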

4

As others have said you can apply specific limits per user or group in /etc/security/limits.conf.

Note: ulimit -n shows the soft limit.

ulimit -H -n 

will show you the hard limit.

This makes the output of ulimit -a and ulimit -n quite confusing: if, for example, you raised the limit from 1024 to 4096 files, you might expect to see the new hard limit, but you would still see 1024, which is the soft limit.

Also, remember that these limits are applied per login, so log in again in a new shell to check your changes; don't expect them to be propagated to existing logins.
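A quick way to see both values side by side after logging in again (a sketch):

```shell
# -S = soft limit (what plain "ulimit -n" reports), -H = hard limit
echo "soft: $(ulimit -Sn)"
echo "hard: $(ulimit -Hn)"
```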

1

It depends on how you start your long-running process. If it's started at boot time (via /etc/rcX.d/* scripts) then you must put a ulimit call in your startup script as the default limit is set by the kernel and it's not tunable without recompiling it.

Using /etc/security/limits.conf could work if you use cron to start it, for example with an entry like this:

@reboot $HOME/bin/my-program 

That should work because /etc/pam.d/cron enables pam_limits.so.

-1

You can add this in /etc/security/limits.conf

root soft nofile 100000
root hard nofile 100000

Save, then reboot.

1
  • 3
    Is there something new here that's not in the accepted answer to this 5-year-old question? Commented Dec 12, 2014 at 11:51
-2

A very nice command is ulimit -n, but there is a problem with too many connections and too many open files:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 519357
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2
  • I tried to clean up your answer but I'm still unclear what you're trying to say to the original posters question. Can you try and clean this up further? Commented Mar 17, 2013 at 1:33
  • Also this is the output of ulimit -a, not ulimit -n. Commented Mar 8, 2018 at 7:49
