
I'm running Windows in a VM on Linux. I have a backing (base) file and an upper-layer qcow2 overlay. Since the upper layer was created, it has grown to a large size. I mounted the disk with guestmount and summed the sizes of the files changed during that period; the total came to about a quarter of the size of the upper layer file. Why has the overlay file grown so large?

P.S. commands for reference:

guestmount -a /path/Win_upper.qcow2 -m /dev/sda1 --ro /1
find /1 -mtime -10 -exec stat --format='%n %s' "{}" \; >> a.txt
cat a.txt | awk '{print $NF}' | while read l; do a=$(($a+$l)); echo $l $a; done
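
For the host-side view, a minimal sketch of checking how much of the overlay is actually allocated (assuming the same /path/Win_upper.qcow2 path and that the VM is shut down while these run; qemu-img is part of the standard QEMU tools):

# virtual size of the disk vs. space the overlay file actually occupies
qemu-img info /path/Win_upper.qcow2
# per-range allocation map; the File column shows whether a guest range is
# stored in the overlay or still served from the backing file
qemu-img map /path/Win_upper.qcow2 | less
# apparent file size vs. blocks really used on the host filesystem (GNU du)
du -h --apparent-size /path/Win_upper.qcow2
du -h /path/Win_upper.qcow2
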
3 comments:
  • Do you have many small changed files? If so, each one either rounds up to a data cluster plus a directory block, or with NTFS possibly one or maybe two MFT block(s). Also: (1) why write c.txt but read b.txt? (2) (GNU) find ... -printf '%p %s\n' is more efficient than -exec stat ...; (3) (any) awk can do arithmetic (a combined sketch follows these comments). Commented Jun 13 at 1:37
  • @dave_thompson_085, with some web searching and a subsequent fsutil fsinfo ntfsinfo c: I found out the cluster size is 4096 bytes. Dividing the total size of the changed files by their number gives ~600 KB per file (and VM image size / number of files ≈ 2 MB), so your hypothesis does not seem to be supported here (though I have not checked the directory block size; could that be many times more?). Commented Jun 13 at 14:52
  • @dave_thompson_085 (1) You noticed the c vs b files, well spotted: the reason is that I did several runs with different -mtime values and copied some lines to the site. (2) Improving the efficiency of the code is a commendable aim; for this specific task I went with what I could write most quickly, and the find inside the guestmounted location itself took subjectively ~100 times longer than the subsequent processing. Commented Jun 13 at 14:53
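
Following the suggestions in the first comment, a sketch of the same measurement as a single pipeline (assuming GNU find and the /1 mount point from the question; -type f is an addition here that skips directories):

# print only the size of each regular file changed in the last 10 days,
# then let awk sum the sizes and count the files in one pass
find /1 -mtime -10 -type f -printf '%s\n' \
  | awk '{ total += $1; n++ } END { printf "%d files, %d bytes, avg %.0f bytes/file\n", n, total, (n ? total/n : 0) }'

The resulting average can be compared directly with the 4096-byte cluster size reported by fsutil fsinfo ntfsinfo to estimate how much per-file rounding could contribute to the overlay's growth.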
