
We have an HP ML350 G6 with an HP Smart Array P410i controller, running Ubuntu x64.

We previously had eight 146 GB drives configured as an 820 GB RAID 50 logical drive.

Now I have replaced four of the 146 GB drives with 900 GB drives and want to resize the logical drive.

But for some reason that doesn't work. hpacucli gives the following error:

ctrl slot=0 ld 1 modify size=max

Error: This operation is not supported with the current configuration. Use the "show" command on devices to show additional details about the configuration.
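(For reference, the "show" command it refers to should be something along the lines of the following, with slot 0 being the embedded P410i; the detailed listing further below is what that kind of command returns:)

    hpacucli ctrl slot=0 show config detail
    hpacucli ctrl slot=0 ld 1 show detail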

I assumed one could have two parity groups of different sizes, since they are independent. What is the best way to migrate so that the full disk size is used? How can I reconfigure without losing data?

My configuration is as follows:

Smart Array P410i in Slot 0 (Embedded)
   Bus Interface: PCI
   Slot: 0
   Serial Number: 5001438013590600
   Cache Serial Number: PAAVPID11071DTD
   RAID 6 (ADG) Status: Disabled
   Controller Status: OK
   Chassis Slot:
   Hardware Revision: Rev C
   Firmware Version: 3.66
   Rebuild Priority: Medium
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Queue Depth: Automatic
   Monitor and Performance Delay: 60 min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Accelerator Ratio: 25% Read / 75% Write
   Drive Write Cache: Disabled
   Total Cache Size: 512 MB
   No-Battery Write Cache: Disabled
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True

   Array: A
      Interface Type: SAS
      Unused Space: 0 MB
      Status: OK

      Logical Drive: 1
         Size: 820.2 GB
         Fault Tolerance: RAID 50
         Number of Parity Groups: 2
         Heads: 255
         Sectors Per Track: 32
         Cylinders: 65535
         Strip Size: 256 KB
         Status: OK
         Array Accelerator: Enabled
         Parity Initialization Status: Initialization Completed
         Unique Identifier: 600508B1001CBB49A596781F682CFA50
         Disk Name: /dev/cciss/c0d0
         Mount Points: /boot 243 MB
         OS Status: LOCKED
         Logical Drive Label: AF2B6C6D5001438013590600F4AF
         Parity Group 0:
            physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 900.1 GB, OK)
            physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 900.1 GB, OK)
            physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 900.1 GB, OK)
            physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 900.1 GB, OK)
         Parity Group 1:
            physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
            physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)
            physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)
            physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)

Thanks!

1 Answer


Your best approach here is to use similarly-sized disks. In this case, you should have ALL 900GB disks.

Why did you only upgrade the size of half the disks, though?

They are all still part of a single array and logical drive, so your 900 GB disks are essentially being treated as 146 GB drives. That extra space can't be reclaimed in the current situation.

That will be the case until you replace the remaining four disks. At that point, the Unused Space: 0 MB figure will reflect a much greater number. You'll have the option of expanding the existing logical drive, or carving additional logical drives out of the unused space.
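Roughly, the hpacucli sequence at that point would look like the following. Treat it as a sketch: exact syntax can vary between hpacucli versions, and the size and RAID level for the extra logical drive are only placeholders.

    # Check that array A now reports unused space (should no longer be 0 MB)
    hpacucli ctrl slot=0 array A show detail

    # Either grow the existing logical drive into that space ...
    hpacucli ctrl slot=0 ld 1 modify size=max

    # ... or carve an additional logical drive out of the unused space
    # (size is in MB; both values here are illustrative)
    hpacucli ctrl slot=0 array A create type=ld size=409600 raid=5

Keep in mind that growing ld 1 only enlarges the block device; the partition table and filesystem on /dev/cciss/c0d0 would still need to be extended separately within the OS.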

Pro tip: you can have logical drives of differing RAID levels on a single group of disks! Each logical drive is presented to the OS as a distinct block device.

For instance, the following array of 8 disks is carved into several logical drives of RAID 1+0 and RAID 5:

Smart Array P400 in Slot 8 (sn: P61630G9SVN702)

   array A (SAS, Unused Space: 404824 MB)

      logicaldrive 1 (72.0 GB, RAID 1+0, OK)
      logicaldrive 2 (120.0 GB, RAID 1+0, OK)
      logicaldrive 3 (100.0 GB, RAID 5, OK)
      logicaldrive 4 (100.0 GB, RAID 1+0, OK)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK)
      physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 146 GB, OK)
      physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 146 GB, OK)
      physicaldrive 2I:1:7 (port 2I:box 1:bay 7, SAS, 146 GB, OK)
      physicaldrive 2I:1:8 (port 2I:box 1:bay 8, SAS, 146 GB, OK)
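If you were building something like that from scratch, the commands would be roughly as follows; the drive list, sizes and RAID levels are placeholders for illustration, not the actual history of that array:

    # The first logical drive defines the array across all eight disks (RAID 1+0 here)
    hpacucli ctrl slot=8 create type=ld drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4,2I:1:5,2I:1:6,2I:1:7,2I:1:8 raid=1+0 size=73728

    # Additional logical drives are then carved from the same array's free space,
    # each with its own RAID level (sizes in MB)
    hpacucli ctrl slot=8 array A create type=ld size=102400 raid=5
    hpacucli ctrl slot=8 array A create type=ld size=102400 raid=1+0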
  • Thanks! We didn't upgrade all drives because we/I assumed we had two RAID 5 arrays in a stripe, which are separate entities. And we're a small company, and the drives are kind of expensive ;-) Thanks for your answer. Commented Jan 28, 2013 at 14:55
