
I'm running into a problem with my PHP sessions and I have no clue how to fix it.

>> df -i
Filesystem                          Inodes    IUsed    IFree IUse% Mounted on
/dev/mapper/vglocal20120426-root00 36274176 32885458  3388718   91% /
tmpfs                               6176642        1  6176641    1% /dev/shm
/dev/sda1                             64000       47    63953    1% /boot
/dev/mapper/vglocal20120426-tmp00    131072     1703   129369    2% /tmp

and

>> df -h
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vglocal20120426-root00  545G  248G  270G  48% /
tmpfs                                24G     0   24G   0% /dev/shm
/dev/sda1                           243M   31M  199M  14% /boot
/dev/mapper/vglocal20120426-tmp00   2.0G  802M  1.1G  42% /tmp

and the session directory itself shows this crazy stat:

drwx-wx-wt 2 root root 1016389632 Jul 9 08:13 session 

Inode usage is already at 91% and keeps growing every second. The strange thing is that I don't have huge traffic to my website (based on real-time analytics). I'm not sure what's going on here. How do I trace back the problem and prevent it from happening again?

We turned off the PHP garbage collector and are using a cron job instead, every 8 hours, to delete old sessions.
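For reference, disabling the built-in session garbage collector usually looks like this in php.ini (a sketch assuming the stock file-based session handler; the path and values shown here are illustrative, not our exact config):

    ; php.ini -- assumed settings, adjust paths/values to your setup
    session.save_handler = files
    ; where session files are written (this path is an assumption)
    session.save_path = "/var/lib/php/session"
    ; probability 0 disables the built-in garbage collector, so something
    ; else (e.g. a cron job) must remove old session files
    session.gc_probability = 0
    ; sessions older than this many seconds are considered garbage
    session.gc_maxlifetime = 1440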

Right now tech support is running a script to delete the session files; it's been running forever, so it feels like a never-ending process.

I'd appreciate it if anyone could help me here. Thanks.

  • If you have turned off the garbage collector, you should take a look at your cron job. Obviously it doesn't work for some reason. Commented Jul 9, 2012 at 13:27
  • That's what I thought too. Commented Jul 9, 2012 at 13:50

1 Answer


Well, yes, you've gotten yourself into a right mess. The problem is that with that many files in the directory, it could take literally weeks to delete them all. You don't want a cronjob (yet) -- the runs are probably just piling up on each other and making the problem worse at present. You also want to be careful about how exactly you do the deletion -- you can't have anything that attempts to glob or otherwise enumerate all the files, because that'll take a long time and a lot of memory before you actually delete anything; instead, you want a script that'll readdir and delete as it goes (I suspect, although I'm not sure, that find -delete might do this; when I had to delete a few million files I used a little ruby script).
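As an illustration of that readdir-and-delete approach, here is a minimal PHP sketch (not the exact script I used; the directory path and the sess_ filename prefix are assumptions):

    <?php
    // Minimal sketch: delete session files as we read the directory,
    // without building a full file list in memory first.
    // The path and the sess_ prefix are assumptions -- adjust to your setup.
    $dir = '/var/lib/php/session';
    $dh = opendir($dir);
    if ($dh === false) {
        die("Cannot open $dir\n");
    }
    $deleted = 0;
    while (($entry = readdir($dh)) !== false) {
        // Skip anything that is not a session file
        if (strpos($entry, 'sess_') !== 0) {
            continue;
        }
        if (unlink("$dir/$entry")) {
            $deleted++;
            if ($deleted % 10000 === 0) {
                echo "$deleted files deleted\n";
            }
        }
    }
    closedir($dh);
    echo "Done: $deleted files deleted\n";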

Once you've got the problem back under control (in a few weeks), then you can run a cronjob every hour to nuke anything older than a few days/weeks/whatever. My guess is you've got years' worth of session files in there. Damned if I know how, either -- in my experience, PHP's not bad about keeping that sort of thing under control.
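A minimal sketch of such an hourly cleanup, again assuming the stock files handler, a /var/lib/php/session path, and a 24-hour cutoff (all of these are assumptions to adjust for your setup):

    <?php
    // Hourly cleanup sketch: remove session files older than the cutoff.
    // Run it from cron, e.g.:  0 * * * * php /path/to/clean_sessions.php
    $dir = '/var/lib/php/session';   // assumed session.save_path
    $maxAge = 24 * 3600;             // assumed cutoff: 24 hours
    $now = time();
    $dh = opendir($dir);
    if ($dh === false) {
        die("Cannot open $dir\n");
    }
    while (($entry = readdir($dh)) !== false) {
        if (strpos($entry, 'sess_') !== 0) {
            continue;
        }
        $path = "$dir/$entry";
        $mtime = filemtime($path);
        if ($mtime !== false && ($now - $mtime) > $maxAge) {
            unlink($path);
        }
    }
    closedir($dh);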

  • ls | xargs rm works pretty well for deleting files in such a situation. Commented Jul 9, 2012 at 13:24
  • Thanks womble. However, a couple of session files are created every second and I don't know how to trace back the problem. I have an idea, though: create a new folder for the PHP session path and delete this one, but I'm not sure. Commented Jul 9, 2012 at 13:27
  • @Oliver: No it doesn't, because ls does a lot more than just a simple readdir. For directories into the millions of files, every syscall is slow, and so any extra calls made on each file makes things reeeeeeeally slow. Commented Jul 9, 2012 at 13:30
  • 1
    @fahmi: A couple of sessions a second isn't necessarily much -- every hit from a search engine spider is likely creating a new session. I'd recommend rationalising your code to not create a session for every single page view. Commented Jul 9, 2012 at 13:31
  • @womble thanks, I'll compare different methods next time I run into such a problem. Commented Jul 9, 2012 at 13:31
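On the comment above about not creating a session for every page view, one common pattern is to start a session only when the visitor already has a session cookie or is actually logging in, so spiders and anonymous hits never create a file. A minimal sketch (session_name() and $_COOKIE are standard PHP; is_login_request() is a hypothetical application-specific check):

    <?php
    // Lazy session start sketch: anonymous visitors (including search
    // engine spiders) never get a session file created for them.
    // is_login_request() is a hypothetical, application-specific check.
    if (isset($_COOKIE[session_name()]) || is_login_request()) {
        session_start();
    }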
