
I've recently set up a new Linux-based file server. The distribution I'm using is Ubuntu 10.10. I've created two software RAID devices as follows:

    mc@vmr:~$ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdf1[1] sdh1[3] sde1[0] sdg1[2]
          19530688 blocks [4/4] [UUUU]
    md1 : active raid10 sdh3[3] sdf3[1] sde3[0] sdg3[2]
          1912461184 blocks 64K chunks 2 near-copies [4/4] [UUUU]

Device /dev/md0 (RAID1) is mounted on "/" and /dev/md1 (RAID10) is mounted on "/home". Unfortunately, the performance of my RAID10 is deeply unsatisfying. Here is the read performance of each individual HDD:

    mc@vmr:~$ sudo hdparm -t /dev/sdh3
    /dev/sdh3:
     Timing buffered disk reads: 410 MB in 3.00 seconds = 136.57 MB/sec
    mc@vmr:~$ sudo hdparm -t /dev/sdf3
    /dev/sdf3:
     Timing buffered disk reads: 402 MB in 3.01 seconds = 133.60 MB/sec
    mc@vmr:~$ sudo hdparm -t /dev/sde3
    /dev/sde3:
     Timing buffered disk reads: 418 MB in 3.01 seconds = 139.10 MB/sec
    mc@vmr:~$ sudo hdparm -t /dev/sdg3
    /dev/sdg3:
     Timing buffered disk reads: 406 MB in 3.00 seconds = 135.32 MB/sec

Since a four-disk near-copies RAID10 should be able to read from two mirror pairs in parallel (roughly 2 × 130 MB/sec), I was naturally expecting read performance of around 260 MB/sec, but instead I got this:

    mc@vmr:~$ sudo hdparm -t /dev/md1
    /dev/md1:
     Timing buffered disk reads: 172 MB in 3.04 seconds = 56.64 MB/sec

At first I assumed that hdparm's testing method was not to be 100% trusted, so I did a kind of real-world read test, and the performance was still not as expected (random.bin is stored on the RAID10):

    mc@vmr:~$ dd if=random.bin of=/dev/null bs=256k
    38800+0 records in
    38800+0 records out
    10171187200 bytes (10 GB) copied, 96.3874 s, 106 MB/s
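For completeness, the same read test can be repeated with the page cache bypassed, which rules out caching effects (a sketch of the variant command only, assuming GNU dd and the same random.bin file; its numbers are not included here):

    # Direct I/O read of the same file, skipping the page cache
    dd if=random.bin of=/dev/null bs=1M iflag=direct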

Lastly, I would like to share that the read performance of the RAID1 is exactly as expected:

    mc@vmr:~$ sudo hdparm -t /dev/md0
    /dev/md0:
     Timing buffered disk reads: 412 MB in 3.01 seconds = 136.91 MB/sec

Has anyone come across a problem like this? Any clues?

  • Could you do some filesystem benchmarks on both partitions and post the results? The commands below will run iozone in automatic mode with a test file twice as large as the amount of RAM you have:

        FILESIZE=$(awk '/MemTotal/ {printf("%dg\n", $2 / 1024 / 1024 * 2)}' /proc/meminfo)
        iozone -a -n $FILESIZE -f /root/tempfile > /tmp/raid1_benchmark
        iozone -a -n $FILESIZE -f /home/tempfile > /tmp/raid10_benchmark

    Commented Mar 1, 2011 at 16:43

3 Answers

Answer 1 (score: 2)

"64K chunks" is way too small. With such a stripe size, almost every I/O operation has a noticeable probability of involving two disks, which means more wasted I/O. My suggestion is at least 512 KiB, and maybe 1 to 2 MiB.

Also, you might find my answer useful.
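For illustration, re-creating the array with a larger chunk size would look roughly like this (a sketch only, not a tested recipe: the device names are taken from the question, and re-creating the array destroys its contents, so back up /home first):

    # WARNING: this rebuilds md1 from scratch and wipes the data on it.
    # --chunk is given in KiB; --layout=n2 keeps the near-copies layout.
    mdadm --stop /dev/md1
    mdadm --create /dev/md1 --level=10 --layout=n2 --chunk=512 \
          --raid-devices=4 /dev/sde3 /dev/sdf3 /dev/sdg3 /dev/sdh3
    cat /proc/mdstat    # confirm the new chunk size before restoring data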

Answer 2 (score: 1)

I just re-created my md1 array:

    leo@stellie:~$ cat /proc/mdstat
    Personalities : [raid1] [raid10]
    md1 : active raid10 sdc6[0] sdb6[2] sda6[1]
          32807040 blocks super 1.2 64K chunks 2 far-copies [3/3] [UUU]
    md0 : active raid1 sda1[0] sdb1[2] sdc1[1]
          248896 blocks [3/3] [UUU]

Notice that:

  1. the metadata version was upgraded to 1.2
  2. the layout uses far-copies instead of near-copies

    leo@stellie:~$ sudo hdparm -t /dev/md1
    /dev/md1:
     Timing buffered disk reads: 372 MB in 3.02 seconds = 123.29 MB/sec

I made some more hdparm tests while the array was not yet in use, and:

  • 64K chunks with near-copies performed better than before (~70 MB/sec)
  • 512K chunks gave a lower transfer rate (~50 MB/sec)
  • the maximum read performance was reached with far-copies and 64K chunks

As you said, I need to perform a test with iozone as well.
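For reference, a three-disk far-copies array like the one above could be created roughly as follows (a sketch only; the device names match the mdstat output in this answer, and creating the array wipes whatever is on those partitions):

    # --layout=f2 selects the "far" layout with two copies of each block;
    # --metadata=1.2 pins the superblock version shown above.
    mdadm --create /dev/md1 --level=10 --layout=f2 --chunk=64 \
          --metadata=1.2 --raid-devices=3 /dev/sda6 /dev/sdb6 /dev/sdc6
    sudo hdparm -t /dev/md1    # repeat the read benchmark on the new array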

Answer 3 (score: 0)

There are a few options.

1. There could be a problem with:

  • hard drive firmware
  • motherboard firmware
  • SATA controller firmware
  • a bug in Ubuntu

2. Try tuning a few things (a combined sketch follows below):

  • hdparm -t --direct /dev/md0
  • blockdev --setra 16384
  • try a bigger chunk size with RAID10 (not sure if it's any good)

Anyway, hdparm is not really the best benchmark tool; try bonnie++, iozone, or dd.
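Putting the tuning suggestions together, a quick check might look like the sketch below (the readahead value and the target device /dev/md1 are assumptions to adapt to the actual setup):

    # Show the current readahead (in 512-byte sectors), then raise it.
    sudo blockdev --getra /dev/md1
    sudo blockdev --setra 16384 /dev/md1

    # Re-run the sequential read test with O_DIRECT to bypass the page cache.
    sudo hdparm -t --direct /dev/md1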
