
This is my first post/question here, so please bear with me.

I have an issue with my system where my existing raid5 array won't assemble. This occurred after I inadvertently wiped the partition table of the first disk in the array.

The system was set up a while ago with two RAID arrays across 4 identical 4TB disks: a 2TB effective/4TB raw RAID 1 array (md0), intended for home backups, and an ~8TB effective/12TB raw RAID 5 array (md1) for media storage.

I was trying to extract all the data from the system so I could wipe the arrays and start again, so I had removed all data from the RAID 1 array and deleted the array in Webmin. I then went into Webmin's partition manager intending to change the 1TB partition on the first drive from being a member of the now-gone md0 into a usable standalone partition. Unfortunately, doing so wiped the partition table for the whole disk.

So now I am left with md1 showing as inactive in Webmin, and I don't know what the best way forward is. Can I "recreate" the RAID partitions on sda and have it added back to md1? Or can I force md1 to assemble from the remaining 3 drives and continue the existing data transfer? The output of

sudo mdadm -D /dev/md1 

is

/dev/md1:
           Version : 1.2
        Raid Level : raid5
     Total Devices : 3
       Persistence : Superblock is persistent
             State : inactive
   Working Devices : 3
              Name : miranda:1  (local to host miranda)
              UUID : 3d6bf0c4:16037750:01681844:95415c3d
            Events : 2086

    Number   Major   Minor   RaidDevice
       -       8       50        -        /dev/sdd2
       -       8       34        -        /dev/sdc2
       -       8       18        -        /dev/sdb2

Using lsblk gives

sda           8:0    0   3.6T  0 disk
sdb           8:16   0   3.6T  0 disk
├─sdb1        8:17   0 931.5G  0 part
└─sdb2        8:18   0   2.7T  0 part
  └─md1       9:1    0     0B  0 md
sdc           8:32   0   3.6T  0 disk
├─sdc1        8:33   0 931.5G  0 part
└─sdc2        8:34   0   2.7T  0 part
  └─md1       9:1    0     0B  0 md
sdd           8:48   0   3.6T  0 disk
├─sdd1        8:49   0 931.5G  0 part
└─sdd2        8:50   0   2.7T  0 part
  └─md1       9:1    0     0B  0 md

The data on these drives isn't critical, but I would prefer to recover/access it rather than lose it.

  • Hi, can you add the results of cat /proc/mdstat, and also what happens if you run mdadm --start /dev/md1? You are probably just playing around with mdraid, but for real systems it's not a good idea to use one physical disk in two arrays, because it reduces the RAID's redundancy and can cause performance issues if your setup is based on HDDs. Commented Nov 1, 2022 at 10:45
  • @AleksandrMakhov cat /proc/mdstat gives: Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]; md1 : inactive sdd2[3](S) sdc2[2](S) sdb2[1](S) 8790398976 blocks super 1.2. Running mdadm --start /dev/md1 returns "unrecognized option '--start'" (see the command sketch below). Commented Nov 2, 2022 at 2:08
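For reference, mdadm has no --start option; the usual sequence to inspect an inactive array and try to start it with the members it still has looks something like the commands below (a sketch based on the comments above, not an exact session):

cat /proc/mdstat               # see which arrays the kernel knows about and their state
sudo mdadm --detail /dev/md1   # show superblock details for the array
sudo mdadm --run /dev/md1      # attempt to start the (degraded) inactive array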

1 Answer


So in the end I was able to get the array to work by using the command

mdadm --run /dev/md1

Once I confirmed that it was all happy and accessible, I used the partition information from the 3 good disks to "repartition" the disk whose table I had accidentally wiped, then added it back into the array (roughly as sketched below). I must have got the partition sizes slightly off, because it had to do a full rebuild, but it completed successfully overnight without any data loss.
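For anyone in the same situation, the repartition-and-re-add step was along these lines (a sketch rather than my exact session; it assumes /dev/sda is the wiped disk and /dev/sdb is a healthy member, so double-check device names before writing anything):

sudo sfdisk -d /dev/sdb > sdb-table.dump   # dump the partition layout of a good disk
sudo sfdisk /dev/sda < sdb-table.dump      # write that layout to the wiped disk
sudo mdadm --add /dev/md1 /dev/sda2        # add the recreated partition back into the array
cat /proc/mdstat                           # watch the rebuild progress

Note that on GPT disks the sfdisk dump also carries the partition UUIDs across; regenerating them on the new disk (for example with sgdisk -G /dev/sda) avoids duplicate identifiers.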
