
We rented a server with two NVMe disks in a RAID1 configuration with LVM on top of that.

Is it possible to change the RAID level to RAID0 without making any changes to the LVM configuration? We don't need redundancy, but we might need more disk space soon.

I have no experience with mdadm. I tried running mdadm --grow /dev/md4 -l 0, but it failed with: mdadm: failed to remove internal bitmap.

Some additional info:

The OS is Ubuntu 18.04.
The hosting provider is IONOS.
I have access to a Debian rescue system, but no physical access to the server.

mdadm --detail /dev/md4
=======================
/dev/md4:
           Version : 1.0
     Creation Time : Wed May 12 09:52:01 2021
        Raid Level : raid1
        Array Size : 898628416 (857.00 GiB 920.20 GB)
     Used Dev Size : 898628416 (857.00 GiB 920.20 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed May 12 10:55:07 2021
             State : clean, degraded, recovering
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : bitmap

    Rebuild Status : 7% complete

              Name : punix:4
              UUID : 42d57123:263dd789:ef368ee1:8e9bbe3f
            Events : 991

    Number   Major   Minor   RaidDevice State
       0     259        9        0      active sync       /dev/nvme0n1p4
       2     259        4        1      spare rebuilding  /dev/nvme1n1p4

/proc/mdstat:
=============
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 nvme0n1p2[0] nvme1n1p2[2]
      29293440 blocks super 1.0 [2/1] [U_]
      resync=DELAYED

md4 : active raid1 nvme0n1p4[0] nvme1n1p4[2]
      898628416 blocks super 1.0 [2/1] [U_]
      [>....................]  recovery = 2.8% (25617280/898628416) finish=704.2min speed=20658K/sec
      bitmap: 1/7 pages [4KB], 65536KB chunk

unused devices: <none>

df -h:
======
Filesystem             Size  Used Avail Use% Mounted on
udev                    32G     0   32G   0% /dev
tmpfs                  6.3G   11M  6.3G   1% /run
/dev/md2                28G  823M   27G   3% /
/dev/vg00/usr          9.8G 1013M  8.3G  11% /usr
tmpfs                   32G     0   32G   0% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
tmpfs                   32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/vg00-home  9.8G   37M  9.3G   1% /home
/dev/mapper/vg00-var   9.8G  348M  9.0G   4% /var
tmpfs                  6.3G     0  6.3G   0% /run/user/0

fdisk -l:
=========
Disk /dev/nvme1n1: 894.3 GiB, 960197124096 bytes, 1875385008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3FEDFA8D-D63F-42EE-86C9-5E728FA617D2

Device             Start        End    Sectors  Size Type
/dev/nvme1n1p1      2048       6143       4096    2M BIOS boot
/dev/nvme1n1p2      6144   58593279   58587136   28G Linux RAID
/dev/nvme1n1p3  58593280   78125055   19531776  9.3G Linux swap
/dev/nvme1n1p4  78125056 1875382271 1797257216  857G Linux RAID

Disk /dev/md4: 857 GiB, 920195497984 bytes, 1797256832 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/md2: 28 GiB, 29996482560 bytes, 58586880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme0n1: 894.3 GiB, 960197124096 bytes, 1875385008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 948B7F9A-0758-4B01-8CD2-BDB08D0BE645

Device             Start        End    Sectors  Size Type
/dev/nvme0n1p1      2048       6143       4096    2M BIOS boot
/dev/nvme0n1p2      6144   58593279   58587136   28G Linux RAID
/dev/nvme0n1p3  58593280   78125055   19531776  9.3G Linux swap
/dev/nvme0n1p4  78125056 1875382271 1797257216  857G Linux RAID

Disk /dev/mapper/vg00-usr: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg00-var: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg00-home: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

lvm configuration:
==================
  --- Physical volume ---
  PV Name               /dev/md4
  VG Name               vg00
  PV Size               <857.00 GiB / not usable 2.81 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              219391
  Free PE               211711
  Allocated PE          7680
  PV UUID               bdTpM6-vxql-momc-sTZC-0B3R-VFtZ-S72u7V

  --- Volume group ---
  VG Name               vg00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <857.00 GiB
  PE Size               4.00 MiB
  Total PE              219391
  Alloc PE / Size       7680 / 30.00 GiB
  Free  PE / Size       211711 / <827.00 GiB
  VG UUID               HIO5xT-VRw3-BZN7-3h3m-MGqr-UwOS-WxOQTS

  --- Logical volume ---
  LV Path                /dev/vg00/usr
  LV Name                usr
  VG Name                vg00
  LV UUID                cv3qcf-8ZB4-JaIp-QYvo-x4ol-veIH-xI37Z6
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                ZtAM8T-MO4F-YrqF-hgUN-ctMC-1RSn-crup3E
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                AeIwpS-dnX1-6oGP-ieZ2-hmGs-57zd-6DnXRv
  LV Write Access        read/write
  LV Creation host, time punix, 2021-05-12 09:52:03 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

Thanks

  • The error about failing to remove the internal bitmap is probably temporary, because one disk is rebuilding, as your mdstat output shows. Once that recovery has finished, you could proceed. Alternatively, you can "fail" and "remove" the nvme1 drive from your arrays, then grow to RAID0 and add the drive back (sketched below). Commented May 17, 2021 at 9:25
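A rough sketch of that mdadm-only route, using the partition names from the question above. The grow/reshape invocations are only a guess at what this would look like on this setup, so verify each step against the mdadm(8) man page on the rescue system before running anything:

    # Drop the second half of the mirror (or simply wait for the rebuild to finish):
    mdadm /dev/md4 --fail /dev/nvme1n1p4
    mdadm /dev/md4 --remove /dev/nvme1n1p4

    # The internal write-intent bitmap must be removed before a level change;
    # this is what the "failed to remove internal bitmap" error was about:
    mdadm --grow /dev/md4 --bitmap=none

    # Convert the remaining mirror to raid0
    # (if mdadm refuses, shrink it to a single device first:
    #  mdadm --grow /dev/md4 --raid-devices=1 --force)
    mdadm --grow /dev/md4 --level=0

    # Re-add the second partition and reshape the raid0 to two devices;
    # mdadm may reshape via an intermediate level internally, which takes time:
    mdadm --grow /dev/md4 --raid-devices=2 --add /dev/nvme1n1p4

    # Afterwards, let LVM see the larger device:
    pvresize /dev/md4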

1 Answer


This may not be the approach you were originally considering, but you could move LVM data between disks so you end up with both drives as LVM physical volumes in your volume group.

To do that, you would remove one drive from the RAID1 array, run pvcreate on the detached drive to reformat it, then add it to your LVM volume group with vgextend. This should double the size of your volume group. Then remove the degraded array from the VG with pvmove and vgreduce, which transfers the data in a way that is fairly fault tolerant (see the "NOTES" section of the pvmove man page for details). Once that degraded array has been removed from your VG, you can stop the array and then add the remaining drive to the volume group the same way you added the other one.
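A rough sketch of that sequence, assuming the device and VG names from the question (vg00 on /dev/md4, built from /dev/nvme0n1p4 and /dev/nvme1n1p4); treat it as an outline to adapt, not a tested script:

    # 1. Drop one half of the mirror (nvme1n1p4 is still rebuilding anyway):
    mdadm /dev/md4 --fail /dev/nvme1n1p4
    mdadm /dev/md4 --remove /dev/nvme1n1p4

    # 2. Turn the freed partition into a second PV and add it to vg00:
    mdadm --zero-superblock /dev/nvme1n1p4   # clear the old RAID metadata
    pvcreate /dev/nvme1n1p4
    vgextend vg00 /dev/nvme1n1p4

    # 3. Move all allocated extents off the degraded array, then drop it from the VG:
    pvmove /dev/md4
    vgreduce vg00 /dev/md4
    pvremove /dev/md4

    # 4. Stop the now-empty array and bring its remaining member into the VG too:
    mdadm --stop /dev/md4
    mdadm --zero-superblock /dev/nvme0n1p4
    pvcreate /dev/nvme0n1p4
    vgextend vg00 /dev/nvme0n1p4

    # If /etc/mdadm/mdadm.conf still references md4, update it and regenerate the initramfs.

pvmove copies extents and can be restarted if it is interrupted, which is a large part of why this route is relatively forgiving.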

I recently migrated LVM-hosted data in a similar scenario, but from a RAID10 with two copies of the data to two RAID1 arrays with three copies per array, on larger disks. So we got the best of both worlds: more space and more reliability. I don't know what your use case is, but I should mention that I personally wouldn't feel comfortable hosting data without RAID unless it's easy to regenerate from scratch. 2 TB seems like a lot of data to recreate or sync, but if no one would be bothered by extended downtime or the network traffic, it's your call.

  • Thanks, that worked flawlessly. I had to run wipefs -a before pvcreate, which seemed a little brute-force, but it worked fine (see the short sketch after these comments). Commented May 21, 2021 at 14:17
  • The servers basically just serve static files that are easy to sync, which is why we decided against RAID. So far that has never been a problem, and downtime would be fine since we have multiple servers doing the same job. Commented May 21, 2021 at 14:20
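For reference, the wipefs step mentioned in the first comment would look roughly like this (partition name taken from the question; it is an alternative to mdadm --zero-superblock for clearing the leftover RAID signature):

    wipefs -a /dev/nvme1n1p4    # wipe leftover RAID/filesystem signatures
    pvcreate /dev/nvme1n1p4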
