
I'm trying to figure out how to add RAID 1 to my CentOS NAS server. Today I have 2x3TB disks in a non-RAID configuration. I'm using LVM to manage a few logical volumes to partition my data. I'm not using close to the full 6TB capacity, but I want the room to expand in the future. I'm also backing up regularly off-site with CrashPlan, but I want to ensure that a disk failure won't mean days of downtime to restore data.

So my plan is to buy 2 more 3TB disks to set up RAID 1. I want to make it so the new disks are paired with the old ones, since the old ones are now a couple of years old and more likely to fail.

Today I'm using roughly 1.6 TB of the 6TB capacity, so I think I can do some moving around and minimize the risk of losing data. Can anyone point me to a guide or help make sure these steps will work? I know there is some risk that something will go wrong, so I'll have backups available, but I want to confirm that these steps should work with low risk, to save myself some time.

  1. First I'll consolidate the 6TB LVM VG onto one PV. So if /dev/sdb1 and /dev/sdc1 are my old drives, I'll shrink some of my LVs (resize2fs, lvresize), move everything onto sdb1 (pvmove), and remove sdc1 from the VG (vgreduce). (I've sketched the commands for all of these steps right after the list.)
  2. Next I'll add a new drive and create a RAID 1 in the BIOS settings for sdc1 and the new drive (call it sdd1). Let's call this rda1 (I don't know what device name it will actually get; this is just to keep my steps clear).
  3. Add rda1 to my VG (vgextend)
  4. Move all LVs to rda1 (pvmove)
  5. Remove sdb1 from the VG (vgreduce)
  6. Add the other new drive and create a RAID 1 in BIOS for sdb1 and new drive 2 (sde1). Let's call this rdb1.
  7. Add rdb1 to my VG (vgextend)
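
To be concrete, here is roughly what I think I'd run for these steps. The device names for the new disks and the RAID arrays are guesses; I don't know yet what the BIOS RAID will actually show up as under Linux (possibly something like /dev/md126 via dmraid/mdadm):

  # Step 1: after shrinking LVs as needed, consolidate onto sdb1 and drop sdc1
  sudo pvmove /dev/sdc1 /dev/sdb1      # move all allocated extents off sdc1
  sudo vgreduce vg_media /dev/sdc1
  sudo pvremove /dev/sdc1              # clear the LVM label so the disk can be reused

  # Steps 2-3: build the first RAID 1 (sdc + new sdd) in the BIOS, then add it
  sudo pvcreate /dev/md126             # assuming the array appears as md126
  sudo vgextend vg_media /dev/md126

  # Steps 4-5: migrate everything onto the RAID and drop sdb1
  sudo pvmove /dev/sdb1 /dev/md126
  sudo vgreduce vg_media /dev/sdb1
  sudo pvremove /dev/sdb1

  # Steps 6-7: build the second RAID 1 (sdb + new sde) in the BIOS, then add it
  sudo pvcreate /dev/md127             # again, the name is a guess
  sudo vgextend vg_media /dev/md127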

Finally, I'd really like to redistribute the data across both RAIDs so that it's not all just sitting on one. Does it make sense for me to just manually move some of the LVs to the other RAID disk? Can I just add striping somehow and let the data be evenly distributed, or would I have to re-create the LVs to do that?
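
For example, would something like this (pvmove with -n to move only one LV's extents) be the right way to do the manual rebalancing, again assuming the arrays end up as /dev/md126 and /dev/md127?

  # Move just the extents belonging to specific LVs onto the second RAID PV
  sudo pvmove -n library /dev/md126 /dev/md127
  sudo pvmove -n photos  /dev/md126 /dev/md127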

Here is a quick overview of my current setup (I also have an SSD with the OS install and LVM, but I'm just showing the data drives):

$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg_media
  PV Size               2.73 TiB / not usable 19.00 MiB
  Allocatable           yes
  PE Size               32.00 MiB
  Total PE              89424
  Free PE               19792
  Allocated PE          69632
  PV UUID               D0Z3Fn-40Yr-akkx-TsLH-n5iM-LQNc-vdLbMf

  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               vg_media
  PV Size               2.73 TiB / not usable 19.00 MiB
  Allocatable           yes
  PE Size               32.00 MiB
  Total PE              89424
  Free PE               40272
  Allocated PE          49152
  PV UUID               4A1tD5-Rj2I-IdZX-2FPS-4KmS-WnjT-TcAGPf

$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg_media
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               5.46 TiB
  PE Size               32.00 MiB
  Total PE              178848
  Alloc PE / Size       118784 / 3.62 TiB
  Free  PE / Size       60064 / 1.83 TiB
  VG UUID               wneSMl-nllf-9yaO-GGv2-iDGv-n4vK-mVfGjk

$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_media/library
  LV Name                library
  VG Name                vg_media
  LV UUID                AOs1yk-sVQE-f6sI-PstX-txtm-mu2d-mgJj4W
  LV Write Access        read/write
  LV Creation host, time srv.mattval.us.to, 2013-05-13 02:37:31 -0700
  LV Status              available
  # open                 1
  LV Size                1.00 TiB
  Current LE             32768
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/vg_media/photos
  LV Name                photos
  VG Name                vg_media
  LV UUID                2DWA1Q-MYTH-1bqq-QgW3-7LiJ-3jNe-v9WXlK
  LV Write Access        read/write
  LV Creation host, time srv.mattval.us.to, 2013-05-13 02:37:48 -0700
  LV Status              available
  # open                 1
  LV Size                1.00 TiB
  Current LE             32768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Logical volume ---
  LV Path                /dev/vg_media/projects
  LV Name                projects
  VG Name                vg_media
  LV UUID                027kQC-dSSJ-Bo40-Xmpa-8ELo-hbGD-jZITBJ
  LV Write Access        read/write
  LV Creation host, time srv.mattval.us.to, 2013-05-13 02:38:01 -0700
  LV Status              available
  # open                 1
  LV Size                1.50 TiB
  Current LE             49152
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

  --- Logical volume ---
  LV Path                /dev/vg_media/docs
  LV Name                docs
  VG Name                vg_media
  LV UUID                El10u0-yYeW-XekC-TP7t-xF9t-qLgz-aFU8AQ
  LV Write Access        read/write
  LV Creation host, time srv.mattval.us.to, 2013-05-13 02:38:15 -0700
  LV Status              available
  # open                 1
  LV Size                128.00 GiB
  Current LE             4096
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5

Actual used sizes are docs=100GB, library=500GB, photos=350GB, projects=620GB
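
One thing I've realized: the allocated LV sizes add up to about 3.62 TiB, which won't fit on a single 2.73 TiB PV, so the shrinking in step 1 isn't optional; I'd need to free at least ~0.9 TiB. Since only ~1.6 TB is actually used, that seems doable. For one LV, I'm assuming the shrink would look something like this (ext filesystem, since I use resize2fs, so it has to be done offline; the 800G target and mount point are just examples):

  sudo umount /dev/vg_media/projects                # ext filesystems can only be shrunk offline
  sudo lvreduce -r -L 800G vg_media/projects        # -r runs e2fsck/resize2fs before reducing the LV
  sudo mount /dev/vg_media/projects /srv/projects   # remount (mount point is just an example)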

2 Answers


Not HW RAID, but one simple option to consider is converting your logical volumes to RAID1 with lvconvert --type raid1 -m 1 vg/lv, optionally specifying which PVs should be used so that you correctly pair the new drives with the old ones. This uses the same kernel driver (md) as mdadm.
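As a rough, untested sketch with your names (assuming the first new disk shows up as /dev/sdd and gets a single partition):

  pvcreate /dev/sdd1
  vgextend vg_media /dev/sdd1

  # Convert an existing linear LV to RAID1; the trailing PV tells LVM
  # where to allocate the new mirror leg (the new disk)
  lvconvert --type raid1 -m 1 vg_media/library /dev/sdd1

  # Watch the mirror sync
  lvs -a -o name,copy_percent,devices vg_media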

PROS:

  • more flexibility: linear/striped LVs for throw-away/bulk data (like caches or downloaded ISO images), and RAID1 and/or RAID5 for the rest, as you wish (see the sketch after this list)
  • it will keep working elsewhere should your RAID controller fail. By elsewhere I mean that if you plug the disks into another machine, they will be recognized by any recent Linux distribution. It will definitely not work under Windows or *BSD, but that is not a problem for your NAS.
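
To illustrate the flexibility point, something like this is possible within one VG (names and sizes are only examples):

  # Bulk / throw-away data: plain striped LV across two PVs, no redundancy
  lvcreate --type striped -i 2 -L 200G -n downloads vg_media

  # Important data: RAID1 LV mirrored across two specific PVs
  lvcreate --type raid1 -m 1 -L 100G -n important vg_media /dev/sdb1 /dev/sdd1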

CONS:

  • slightly larger load on the server (I have no performance numbers.)

LVM vs. mdadm:

PRO mdadm:

  • mdadm is more widely used, has been tested by generations of admins, and has many tutorials available
  • LVM is missing some mdadm features (reshaping), which you do not need (now). This is a work-in-progress.

PRO LVM:

  • LVM is more flexible, which you may not need (now)
  • migration to LVM RAID is very easy (see the lvconvert example above)

Performance:

LVM vs mdadm RAID:

  • the performance should be the same but...
  • there is a known bug where some operations on LVM RAID are significantly slower than mdadm's: Bug 1332221 - Bad performance in fsyncs for lvm raid1

Software RAID vs. FakeRAID

  • No idea, but it depends a lot on the RAID controller.

Resolution:

If you want a proven solution, mdadm RAID with LVM on top of it would be a good choice.

If you want a hassle-free migration and flexibility, LVM RAID is the way to go.
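
If you go the mdadm route, the classic layout would look roughly like this (a sketch only; device names are examples, and each old disk has to be pvmoved/vgreduced out of the VG before it can be put into an array):

  # Two RAID1 arrays, each pairing an old disk with a new one
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1

  # LVM on top of the arrays
  pvcreate /dev/md0 /dev/md1
  vgextend vg_media /dev/md0 /dev/md1

  # Record the arrays so they assemble at boot (path on CentOS)
  mdadm --detail --scan >> /etc/mdadm.conf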

  • Interesting. I was thinking HW RAID would just be most efficient but I hadn't thought about the RAID controller being a point of failure that could leave me with disks that don't work properly as a RAID if I wanted to just pull them into a new machine. Follow up question, is any configuration stored in the OS or is it all on disk? If I blow away the OS install and install another Linux OS (upgrade, different distro, etc.) will it still read the LVM partitions and the software RAID and work as-is when I mount the disks? Commented Jun 21, 2016 at 20:56
  • Also, how much of an increase in load is it? This is just a personal server to share/backup files, but I don't want to slow down file access if it makes a big difference. Commented Jun 21, 2016 at 20:57
  • Actually I'm reading more about "fake RAID" that my motherboard supports wiki.archlinux.org/index.php/Installing_with_Fake_RAID and I'm starting to think I might as well just go with software RAID, rather than relying on that. Commented Jun 22, 2016 at 4:57

Serverfault is for professional sysadmins, who would inherently back up their entire dataset before doing something like this. At which point, if you have the time window to do it, many would just wipe, reconfigure/reformat, and restore. That way you have a known stable setup.

  • "Serverfault is for professional sysadmins, who would inherently backup their entire dataset" I guess CERN admins and other such home users with their "petty" bytes of data should better ask at serverfault. ;-) Commented Jun 23, 2016 at 9:02
  • Yes, they do, we've had a few from there come here. Commented Jun 23, 2016 at 14:36
  • I actually meant superuser, which is for home users who do not back up and restore everything. I wanted to say that sometimes it can be rather impractical to do so. Commented Jun 24, 2016 at 14:50
  • Professional sysadmins back up their data as a matter of course; there's no extra effort and no impracticality for them. Commented Jun 24, 2016 at 16:37
