There isn't a standard *nix command for this, but CephFS supports "recursive statistics" that expose exactly that information. This is harder to find in the documentation than I expected, but here's a blog post about viewing them through CephFS' "virtual xattrs": https://blog.widodh.nl/2015/04/playing-with-cephfs-recursive-statistics. "ceph.dir.rbytes" is the sum of all file sizes underneath that directory in the hierarchy; similarly, there are "rsubdirs" and "rfiles" (which together sum to "rentries").
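As a minimal sketch of reading those virtual xattrs programmatically (assuming a Linux host with a CephFS mount; the `/mnt/cephfs` path below is hypothetical, and `os.getxattr` is Linux-only):

```python
import os

# CephFS recursive-statistics virtual xattrs; values are ASCII decimal strings.
RSTAT_ATTRS = ("ceph.dir.rbytes", "ceph.dir.rfiles",
               "ceph.dir.rsubdirs", "ceph.dir.rentries")

def decode_rstat(raw: bytes) -> int:
    """Decode a virtual-xattr value such as b'1073741824' into an int."""
    return int(raw.decode("ascii").strip())

def dir_rstats(path: str) -> dict:
    """Read recursive statistics for a directory on a CephFS mount.

    Only works on CephFS: os.getxattr raises OSError for these names
    on other filesystems.
    """
    return {name: decode_rstat(os.getxattr(path, name))
            for name in RSTAT_ATTRS}

# Example usage (hypothetical mount point):
# stats = dir_rstats("/mnt/cephfs/some/dir")
# print(stats["ceph.dir.rbytes"])
```

The same values are visible from the shell with `getfattr -n ceph.dir.rbytes <dir>` on a CephFS mount.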
You can also set a mount option (in both the userspace and kernel clients) to report rbytes as the directory's size (i.e., the size you see when you "ls" it, which would normally be 512 bytes or 4 KB). Doing that causes trouble with some tools, though, as they don't expect directory sizes to change like that, or may inspect them to try to identify the local block size.
There are a few caveats to using rstats for precise information:
- File sizes reflect only the apparent (specified) size of a file, not the amount of space actually allocated. If you write 1 byte at offset 1 GB in a sparse file, it will report 1 GB.
- Updating the statistics requires taking locks on inodes and directories, which can be intrusive to client IO in some cases. For this reason, the information is propagated up the tree (from file to directory, to parent directory, and so on) lazily, as those locks are mutated for other reasons. It won't be an hour out of date, but it can easily be ten seconds old.
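The sparse-file caveat above is easy to demonstrate on any local filesystem that supports holes (this isn't CephFS-specific; rstats simply sum the same apparent size that `os.stat` reports):

```python
import os
import tempfile

# Write 1 byte at offset 1 GiB, leaving a hole before it. rstats sum the
# apparent size (st_size), so a file like this contributes ~1 GiB to
# ceph.dir.rbytes even though almost no space is actually allocated.
with tempfile.NamedTemporaryFile() as f:
    f.seek(2**30)      # seek 1 GiB into the empty file
    f.write(b"\0")     # single byte; everything before it is a hole
    f.flush()
    st = os.stat(f.name)
    apparent = st.st_size            # 1 GiB + 1 byte
    allocated = st.st_blocks * 512   # space actually allocated on disk

print(f"apparent={apparent} allocated={allocated}")
```

On filesystems with sparse-file support, `allocated` stays tiny while `apparent` is just over a gigabyte, which is the number rbytes would roll up.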
From the comments:

- Would "ceph fs status" or the dashboard information suffice for your needs?
- "ceph fs status" throws an error; here's the last part: File "/usr/share/ceph/mgr/status/module.py", line 234, in handle_fs_status assert metadata AssertionError. The dashboard/GUI -> File Systems shows the directory tree, but not the used space per directory.
- "ceph fs status" does not show how much space a directory in the fs uses; instead, the command shows the full space used by the fs.