You’re seeing the perfectly normal consequences of accessing shared storage without a filesystem designed for that usage (currently on mainline Linux, the options are GFS2 and OCFS2, neither of which is great).
To answer your title question: any non-clustered filesystem on Linux caches directory entries and inodes (and, notably, the filesystem superblock), because the VFS layer itself does that. This has nothing to do with XFS in particular. XFS may do additional caching of its own, but what you demonstrated is behavior you would also see with ext4, BTRFS, F2FS, and essentially any other Linux filesystem except GFS2 and OCFS2.
Expanding on this a bit: essentially any filesystem not designed for shared storage access assumes it has exclusive access to the underlying storage device while mounted. That assumption matters for performance, because anything that is not expected to change can simply be cached, which eliminates a lot of unnecessary storage accesses.
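To make the effect concrete, here is a minimal sketch of how the stale view shows up, assuming the (hypothetical) setup of the same block device mounted read-write at /mnt/shared on two hosts. Host A creates a file; host B, polling the same directory, keeps getting answers from its cached metadata:

```python
# Hypothetical illustration, run on host B. Both hosts have the same block
# device mounted read-write at /mnt/shared, i.e. the unsupported setup above.
import os
import time

MOUNT_POINT = "/mnt/shared"  # hypothetical mount point

# Host A (not shown) has meanwhile run something like:
#     open("/mnt/shared/new-file", "w").close()
#
# Host B polls the directory, but the VFS answers from its cached dentries
# and inodes rather than re-reading the device, so "new-file" does not
# appear here no matter how long you wait:
for _ in range(10):
    print(sorted(os.listdir(MOUNT_POINT)))
    time.sleep(1)
```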
This, in turn, leads to cache-coherency problems in shared-storage setups like yours. What you saw is actually the best-case result; the worst case is that one or more of the hosts crashes because of a bug in the filesystem driver triggered by on-disk state it was never written to handle, such as a torn write from another host.
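If you want to confirm that caching is what you’re seeing, you can ask the kernel to discard its caches and list the directory again. This is strictly a diagnostic, not a fix; it does nothing to prevent the torn-write scenario above. A minimal sketch, assuming root and the same hypothetical mount point:

```python
# Diagnostic only (requires root): discard the page cache plus reclaimable
# dentries and inodes, forcing the next lookup to re-read from the device.
# This does NOT make concurrent shared-storage access safe.
import os

os.sync()  # flush dirty data first so more cache entries become reclaimable
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")  # 1 = page cache, 2 = dentries/inodes, 3 = both

print(sorted(os.listdir("/mnt/shared")))  # hypothetical mount point
```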
If you need multiple systems to access the same storage, you instead need one of:
- A clustered filesystem like GFS2 or OCFS2.
- A network filesystem like NFS or SMB3, backed by non-shared storage (see the sketch after this list).
- A distributed storage system like Ceph.
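For the NFS option, the key change is that only one machine (the server) ever touches the block device; every other host goes through it over the network. A sketch of the two config files involved, with the hostname, paths, and subnet all made up:

```
# Server, /etc/exports: export locally mounted, non-shared storage
/srv/share  192.168.1.0/24(rw,sync,no_subtree_check)

# Each client, /etc/fstab: mount the export, not the block device
server.example.com:/srv/share  /mnt/share  nfs  defaults,_netdev  0  0
```

After editing /etc/exports, run `exportfs -ra` on the server to pick up the change.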