
I am using a Netgear ReadyNAS as network storage for our server, which runs CentOS 6.6 Linux. The server runs the Rocks cluster distribution, with all of our users' home directories located on the NAS. My understanding is that each home directory is automounted under /home when the user logs in.

Recently we have been hitting the infamous, intermittent 'No space left on device' error even though the drive is nowhere near full, and it is not a case of exhausted virtual memory either. The issue is usually resolved (temporarily) by deleting or compressing some files. I would like to check whether we are running out of inodes, but for some reason the share holding our user directories does not report any inode information and shows only zeros. Could someone please explain why this is the case, and how I can check inode usage on this share of my NAS?
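(For anyone debugging the same symptom: since every file, directory, and symlink consumes one inode, counting directory entries per home directory is a quick way to spot a runaway job creating millions of small files. This is a generic sketch; the /home/* paths match the automounted layout described above.)

```shell
# Count inode consumers (files, dirs, symlinks) under each home
# directory and list the biggest offenders first.
for d in /home/*/; do
    printf '%8d %s\n' "$(find "$d" 2>/dev/null | wc -l)" "$d"
done | sort -rn | head
```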

The NAS is an NFS file system in a RAID 10 configuration, while my Linux cluster uses ext4. Below is the output of df -h on our master node:

Filesystem                   Size  Used  Avail Use% Mounted on
/dev/sda2                     20G   16G   2.5G  87% /
tmpfs                        7.9G   12K   7.9G   1% /dev/shm
/dev/sda1                    190M  103M    78M  57% /boot
/dev/sda6                    4.7G   12M   4.5G   1% /tmp
/dev/sda3                     12G  2.0G   9.0G  18% /var
tmpfs                        3.9G   63M   3.8G   2% /var/lib/ganglia/rrds
nas-0-1:/nas/nas-home/user1   15T  8.4T   6.3T  58% /home/user1
nas-0-1:/nas/nas-home/user2   15T  8.4T   6.3T  58% /home/user2

and df -i:

Filesystem                   Inodes   IUsed    IFree IUse% Mounted on
/dev/sda2                   1281120  365426   915694   29% /
tmpfs                       2057769       4  2057765    1% /dev/shm
/dev/sda1                     51200      50    51150    1% /boot
/dev/sda6                    320000     797   319203    1% /tmp
/dev/sda3                    768544   20175   748369    3% /var
tmpfs                       2057769     596  2057173    1% /var/lib/ganglia/rrds
nas-0-1:/nas/nas-home/user1       0       0        0     - /home/user1
nas-0-1:/nas/nas-home/user2       0       0        0     - /home/user2
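(Note for readers: an NFS client only knows the mount transport, not the server's on-disk filesystem, so the client side cannot tell you what is really backing these shares. You can confirm what the client sees with the commands below; nfsstat -m assumes the nfs-utils package is installed.)

```shell
# List only the NFS mounts on the master node. The client reports
# type "nfs"; the server-side filesystem (ext4, btrfs, ...) is not
# visible from here.
mount -t nfs,nfs4
nfsstat -m   # per-mount NFS options: vers, proto, rsize, ...
```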

Now, if I ssh into the NAS itself and repeat the commands, here is the output of df -h on the NAS:

Filesystem   Size  Used  Avail Use% Mounted on
udev          10M  4.0K    10M   1% /dev
/dev/md0     4.0G  578M   3.1G  16% /
tmpfs        2.0G     0   2.0G   0% /dev/shm
tmpfs        2.0G  5.9M   2.0G   1% /run
tmpfs        978M  1.5M   977M   1% /run/lock
tmpfs        2.0G     0   2.0G   0% /sys/fs/cgroup
/dev/md127    15T  8.4T   6.3T  58% /nas
/dev/md127    15T  8.4T   6.3T  58% /home
/dev/md127    15T  8.4T   6.3T  58% /apps
/dev/md127    15T  8.4T   6.3T  58% /var/ftp/nas-home

and df -i on the NAS:

Filesystem  Inodes  IUsed   IFree IUse% Mounted on
udev        499834    446  499388    1% /dev
/dev/md0         0      0       0     - /
tmpfs       500472      1  500471    1% /dev/shm
tmpfs       500472    593  499879    1% /run
tmpfs       500472     22  500450    1% /run/lock
tmpfs       500472     15  500457    1% /sys/fs/cgroup
/dev/md127       0      0       0     - /nas
/dev/md127       0      0       0     - /home
/dev/md127       0      0       0     - /apps
/dev/md127       0      0       0     - /var/ftp/nas-home

The share in question on my NAS is /nas. Why is it shown as containing 0 inodes?

Thank you in advance for any help you can offer. This problem has been driving me nuts and hindering our work.

  • The NAS might be using a file system that requires native tools to report on file-system internals such as the inode count. Check with mount on the NAS what the file system is. Commented Nov 16, 2017 at 8:19
  • @HBruijn thank you for the tip. It turns out that my NAS in fact uses a btrfs file system. I thought it was NFS because running mount on my master node shows the NAS as type nfs; that is simply because we access the NAS over the NFS protocol on our network. With this knowledge, the answer to my question becomes clear, so I guess I'll post my findings as an answer unless you'd like to post your own. Commented Nov 16, 2017 at 21:57
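(To summarize the diagnosis for future readers: btrfs allocates inodes dynamically rather than reserving a fixed table at mkfs time, so statfs(2), and therefore df -i, reports 0 for its inode counts. On the NAS itself you can confirm the on-disk type and inspect usage with the native tool; the commands below assume GNU coreutils stat and btrfs-progs, with /nas being the mount point from the question.)

```shell
# Print the on-disk filesystem type of the share.
stat -f -c 'type: %T' /nas
# btrfs has no fixed inode limit, so inspect space and metadata
# allocation with the native tool instead of df -i.
btrfs filesystem df /nas      # Data / Metadata / System chunk usage
btrfs filesystem show /nas    # per-device allocation
```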
