
My ISP has a machine with 6 HDDs (3TB+3TB+3TB+2TB+2TB+1TB) which we want to turn into an FTP/download server for local users. I am planning to install CentOS 6.5 on one of the 2TB disks; all remaining space will hold big files for download. Given that we do not have identically sized HDDs right now, and the disks may be upgraded later, what is the best partitioning scheme for this system (LVM, software RAID, or a combination)? And how do I set it up?

Please focus on these points:

a. Mirroring or data retrieval is not important here

b. Performance and hard disk speed are the major concern

c. The 1TB HDD may be upgraded in next few months

d. At least 400 LAN users download content simultaneously from such servers

e. No hardware RAID controller available

Thanks in advance for your cooperation!

  • Also, how much data do you actually need/expect to store? That may make a big difference. If you only need to store 2TB you could do RAID10 across 6 disks. But that won't work if you need to store 10TB. Commented Jan 25, 2014 at 1:15
  • "Mirroring or data retrieval is not important here." Can you be clearer about this expectation? RAID is not there to preserve data integrity; that's what backups are for. The purpose of RAID is to preserve uptime. Will your 400 LAN users be up a creek if this thing goes down and needs time to be rebuilt? Commented Jan 25, 2014 at 1:17
  • Grant: I expect to store maximum possible data on the HDDs, maybe 14TB now and 16TB after the upgrade. @Skyhawk: By that I mean if any HDD crashes somehow we do not need that data rather we would like to keep the machine running with other contents on the other disks. It's a free service for the users and they should accept downtime. Commented Jan 25, 2014 at 9:49
  • @bonytasnim If you don't want the machine to go down when a disk fails, why are you proposing to use a single unmirrored disk for the operating system? You need to think this through. Commented Jan 26, 2014 at 0:26

4 Answers


For best speed and the possibility to extend this later, I'd use 3TB+3TB RAID0 and 2TB+2TB RAID0, connected as a JBOD using md (it would create a 10TB drive). I'd wait for the 1TB→3TB upgrade and then add a 3TB+3TB RAID0 to the JBOD (it would then be 16TB). It would all be twice as fast as a single drive.
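Under mdadm this layout could be sketched roughly as follows (a linear md array is mdadm's JBOD; all device names are placeholders for your actual disks):

```shell
# Hypothetical device names -- adjust to your system.
# Stripe the two 3TB disks and the two 2TB disks into RAID0 pairs:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc   # 6TB
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde   # 4TB

# Concatenate the two pairs into one 10TB "JBOD" (linear) array:
mdadm --create /dev/md2 --level=linear --raid-devices=2 /dev/md0 /dev/md1

mkfs.ext4 /dev/md2
```

A linear array appends members end-to-end, so a third RAID0 pair can be tacked on later after the 1TB disk is upgraded.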

If you'd rather first wait for the 1TB→2TB upgrade, another setup would be faster: RAID0(JBOD(3+2)+JBOD(3+2)+JBOD(3+2)). This would be 15TB, 3 times faster than a single drive.

But with no redundancy, it would all die with no chance of recovery when the first drive fails. So a more reasonable setup would be:

JBOD(RAID5(3TB+3TB+3TB), degraded RAID5(2TB+2TB)), which, after the 1TB→2TB upgrade, would become JBOD(RAID5(3TB+3TB+3TB), RAID5(2TB+2TB+2TB)). This would get you 10TB with redundancy, with fast reads (3× the speed of a single drive) and slow writes (slightly slower than a single drive).
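A degraded RAID5 can be created up front with mdadm's `missing` keyword, which reserves a slot for a disk that isn't there yet. A rough sketch, with placeholder device names:

```shell
# Full RAID5 over the three 3TB disks (6TB usable):
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Degraded RAID5 over the two 2TB disks -- "missing" holds the third slot
# open (4TB usable, but no redundancy until the slot is filled):
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sde /dev/sdf missing

# After the 1TB disk is replaced with a 2TB one, complete the array;
# mdadm rebuilds parity into the new member:
mdadm --add /dev/md1 /dev/sdg
```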

  • Thanks Tometzky, for your answer; it's helpful and scaled to my actual requirements. The problem is I cannot wait for the 1TB -> 3TB nor the 1TB -> 2TB upgrade, and it is obvious that more space will be added. Is there any method to do RAID on 2 upgraded HDDs later on that system? Commented Jan 25, 2014 at 10:37
  • First option will allow you to use JBOD(RAID0(3+3)+RAID0(2+2)). JBOD will allow you to easily add and remove drives at the end of array. You'll just need to expand a filesystem after adding a new drive and shrink it before removing. Commented Jan 25, 2014 at 13:39
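Assuming a linear md array with an ext4 filesystem on it, the grow step described in the comment above might look like this (placeholder names):

```shell
# Add a new RAID0 pair (already created as /dev/md3) to the end of the
# linear "JBOD" array:
mdadm --grow /dev/md2 --add /dev/md3

# Grow the filesystem into the new space (ext4 can do this online):
resize2fs /dev/md2
```

Shrinking before removal would be the reverse: shrink the filesystem first, then the array.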

My vote is for LVM. LVM allows you to stripe, resize and add and remove disks fairly easily on the fly. If your system has hot swap bays these upgrades can be done with zero downtime.
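A minimal LVM sketch of this approach, with placeholder device and volume names (the OS disk is left out):

```shell
# Register the data disks as physical volumes and group them:
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
vgcreate ftp_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Striped logical volume for read throughput. Note: a stripe set is
# limited by its smallest member, so with uneven disks some capacity is
# left over (usable later in a second, linear LV):
lvcreate -i 5 -I 64 -L 5T -n ftp_lv ftp_vg
mkfs.ext4 /dev/ftp_vg/ftp_lv
```

Adding a disk later is `vgextend ftp_vg /dev/sdg` followed by `lvextend` and `resize2fs`, all doable online.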


LVM adds a lot of flexibility at no performance cost, using it is a no-brainer.

While you don't want to spend space on backups (which would take up half the space), with 6 disks you do have a high risk of disk failure, so you need some form of redundancy (for uptime, not point-in-time recovery). You can get that with RAID-like technology: either Linux's md RAID (which is integrated with LVM) or Btrfs. The preferred RAID level depends on how much read performance, write performance, and uptime you need. An interesting property of Btrfs is that you can use different RAID levels for data and metadata. With higher redundancy for metadata, some failures can leave big holes in large files but still keep filesystem integrity and leave a proportion of small files unaffected.
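The data/metadata split mentioned above is just a pair of mkfs flags in Btrfs; for example (placeholder devices):

```shell
# Data striped (raid0, no redundancy), metadata mirrored (raid1) so the
# filesystem structure survives a partial failure:
mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Any member device can be named in the mount:
mount /dev/sdb /srv/ftp
```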


Considering the requirements you listed, I would use the following setup:

Array 1 - RAID0 (3x3TB disks)

This would yield 9TB of storage. Keep in mind that if a single disk fails in this array, your data is toast. But you did point out that you don't care about data being retrievable and that performance is important, so this throws all caution to the wind and gives you the best performance with the least amount of protection.

Array 2 - RAID1 (2x2TB disks)

I would use this as your backup storage along with OS installation. You do intend on taking backups, right?
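If these two arrays were built with mdadm rather than through the installer, a sketch might look like this (placeholder device names):

```shell
# Array 1: RAID0 across the three 3TB disks -- 9TB, no fault tolerance:
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Array 2: RAID1 mirror of the two 2TB disks for OS/backups -- 2TB usable:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde /dev/sdf
```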

A couple of things to note:

  • Use the CentOS partitioning manager to configure the RAID configurations prior to installing the OS.

  • Once the OS is installed, you can then use LVM to manage things like snapshots, filesystem growth, and other tasks. Changes to your RAID configurations would be handled with mdadm.

  • This is a risky configuration. I would be very surprised if an ISP is not willing to release money to buy appropriate hardware required to set up a file server. You should really have identical disks, an appropriate storage controller, and some type of DAS enclosure.

  • Thanks for your support. Well, this is just a free service for local users, so the ISP is not considering enterprise costs. What about the other disk? Add it to LVM as a single disk? Another big question is: what if I do want to install with the 2 2TB HDDs (maybe RAID1 as per your suggestion) and add the 3TB disks later? Actually those disks have data now in NTFS format which may take some time to free up. Is there any way to use them with the existing data, by the way? Commented Jan 25, 2014 at 11:03
  • I would use the 1TB disk for the OS installation. Adding disks to RAID arrays is pretty straightforward - it can just take a long time depending on the amount of storage in use. Installing the OS on a standard physical drive saves the headache of having to worry about the bootloader being present on new drives added to the configuration. Commented Feb 15, 2014 at 6:51
