My question is, when I create the new partition on the new 1TB drive, do I create a 1TB "Raid Auto Detect" partition?

You can, but you're not going to gain anything immediately from that.

Or do I create another 640GB partition and grow it later?

Yes.

RAID-on-partition has its uses, but when you're using the drives in a pseudo-storage-pool setup, you're sometimes better off using 'whole drive' RAID members instead of 'partition' members. Designating the whole drive (e.g. /dev/sdc instead of /dev/sdc1) implicitly tells the RAID layer that the entire drive is to be used, so no partition ever needs to be created, expanded, moved, or what-have-you. This turns each hard drive into a 'storage brick' that is more or less interchangeable, with the caveat that the usable capacity of each brick is bounded by the smallest drive in the set (e.g. with a 40 GB, an 80 GB, and two 120 GB drives, the RAID layer will use 4x 40 GB, because it can't obtain more space than the smallest drive offers). Note that this answer is for Linux software RAID (mdadm) and may or may not apply to other environments.
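As a rough illustration of the 'whole drive' technique, here's a minimal sketch using mdadm (the device names /dev/sd[b-e] and the choice of RAID 5 are assumptions for the example, not a recommendation):

    # Build a RAID 5 array from four whole drives (no partitions).
    # Each member contributes only as much as the smallest drive.
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Verify the array assembled and check its usable size.
    cat /proc/mdstat
    mdadm --detail /dev/md0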

The downside is that if you need flexibility in your RAID configuration, you lose that ability, because the entire drive is claimed. You can, however, offset that loss through the use of LVM-on-RAID. Another issue with whole-drive RAID is that some recovery processes will require a little more thought, as they often assume the presence of a partition; a tool that expects a partition table may balk at the drive.


Unsolicited Advice (and nothing more than that, if it breaks, you keep both pieces, etc.):

Your best bet is to set up your RAID array as you like, using the 'whole drive' technique, but then use LVM to manage your partitions. This gives you a smidgen of fault tolerance from the RAID, plus the flexibility of dynamically resizable 'partitions'. An added bonus: if you use Ext3 (and Ext2 as well, as it turns out; see the follow-up below) you can grow the 'partitions' while they are mounted, though shrinking still requires an unmount. Being able to shift the size of mountpoints around while they are 'hot' is a wonderful feature, and I recommend considering it. A sketch of this layering follows.
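To make the LVM-on-RAID layering concrete, here is a hedged sketch building on the hypothetical /dev/md0 array above (the volume group name vg0, the volume name data, and the mountpoint /srv/data are all made up for illustration):

    # Put LVM on top of the RAID device, then carve out a volume.
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 100G -n data vg0

    # Format and mount it like any other block device.
    mkfs.ext3 /dev/vg0/data
    mkdir -p /srv/data
    mount /dev/vg0/data /srv/data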


Additional follow-up:

I received a comment that Ext2 does not support hot resizing. In reality, it does, but only for increases in size. You can read more at this link here. Having done it a few times myself, I can say that it is possible, it does work, and it can save you time.
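For completeness, here's what a hot grow might look like on the hypothetical volume above (assuming a kernel and resize2fs new enough for online resizing, which any 2.6-era ext3 setup should have):

    # Grow the logical volume by 20 GB, then grow the filesystem
    # to fill it -- all while /srv/data stays mounted.
    lvextend -L +20G /dev/vg0/data
    resize2fs /dev/vg0/data

    # Shrinking is the reverse order and requires an unmount:
    #   umount /srv/data
    #   resize2fs /dev/vg0/data <new-smaller-size>
    #   lvreduce -L <new-smaller-size> /dev/vg0/data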
