I have a four-disk Linux software RAID, configured as RAID5:
$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda         8:0   0  3.6T  0 disk
└─sda6      8:6   0  3.6T  0 part
  └─md10    9:10  0 10.8T  0 raid5 /mnt/volume0
sdb         8:16  0  3.6T  0 disk
└─sdb6      8:22  0  3.6T  0 part
  └─md10    9:10  0 10.8T  0 raid5 /mnt/volume0
sdc         8:32  0  3.6T  0 disk
└─sdc6      8:38  0  3.6T  0 part
  └─md10    9:10  0 10.8T  0 raid5 /mnt/volume0
sdd         8:48  0  3.6T  0 disk
└─sdd6      8:54  0  3.6T  0 part
  └─md10    9:10  0 10.8T  0 raid5 /mnt/volume0

/dev/md10 is formatted XFS. I previously had other partitions/junk on these disks, which I have since deleted, hence the sdX6 partition names.
$ /sbin/cfdisk -r
Disk: /dev/sda
Size: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Label: gpt, identifier: 7C1B7075-38E8-40E0-A382-D960F5AA85A8

    Device             Start          End      Sectors   Size  Type
>>  Free space          2048     50008063     50006016  23.8G
    /dev/sda6       50008064   7796883455   7746875392   3.6T  Microsoft basic data
    Free space    7796883456   7814037134     17153679   8.2G

The other disks are identical, with a small amount of free space before and after the single partition with data. If possible, I would like to resize all four partitions so each RAID partition uses the entire disk.
I do have the data backed up, but it would be nice if I could resize everything in place and keep the existing array and filesystem intact. Before I go experimenting with fdisk and mdadm, I would like some confirmation that what I'm trying to do is possible.
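To make the question concrete, this is roughly the per-disk sequence I was thinking of trying, one member at a time (untested; sgdisk is just what I would reach for to script it, fdisk or cfdisk would do the same job, and the FD00 type code is my guess rather than something mdadm requires):

# Drop one member, repartition its disk to use all of it, re-add it,
# and wait for the rebuild before touching the next disk. Note that
# the array runs degraded during each rebuild.
mdadm /dev/md10 --fail /dev/sda6 --remove /dev/sda6
sgdisk --delete=6 --new=6:2048:0 --typecode=6:FD00 /dev/sda   # recreate sda6 spanning the whole disk
mdadm /dev/md10 --add /dev/sda6
cat /proc/mdstat                                              # wait for the resync to finish

# Once all four members are full-disk partitions:
mdadm --grow /dev/md10 --size=max
xfs_growfs /mnt/volume0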
You could do it with dd, repeated for each drive. But in the end, you have 3.6 TB per drive now, and after adding about 0.03 TB you will still have 3.6 TB per drive. Anything you do here gets you essentially no additional storage space in exchange for reduced lifespan of your drives. Take it as a lesson learned!
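For scale, the reclaimable space per drive works out directly from the sector counts in the cfdisk output above (512-byte sectors, free space before plus free space after sda6):

# total free space on one disk, in bytes
$ echo $(( (50006016 + 17153679) * 512 ))
34385763840

That is about 32 GiB per 3.6 TiB drive, which is why lsblk would still report the same 3.6T afterwards.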