Benchmarking Linux SATA Controllers

My project for the weekend was to build a Linux storage server for my network.

I picked up a pair of ST3500631NS (server-grade 500GB SATA) disk drives from Newegg at a very good price ($119 apiece). I wanted to add them to chinacat, my main workstation, preferably as a redundant mirror.

Unfortunately, when I opened the computer case I discovered my system board only had two SATA ports. I needed one port for my system disk, two more for the new drives. For some reason I had thought my system had four ports, but it didn't. I was coming up short.
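In hindsight, I could have checked the port count from Linux without opening the case. This is only a rough sketch, and the output varies a lot by chipset (some SATA controllers report themselves as IDE or RAID devices), but something like this gives the idea:

    lspci | grep -i -e sata -e ide     # list the ATA/SATA controllers the kernel sees
    ls /sys/class/scsi_host/           # roughly one hostN entry per ATA port (PATA included)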

So it was time for plan B. I decided to try adding more SATA ports with an add-in controller card.

I picked up an inexpensive SATA controller at Fry's for $40, a SIIG SC-SAT212-S4. This is a PCI card with two internal SATA ports.

I debated between this card and a more expensive alternative. I was concerned that the bottom-of-the-line card would result in lower disk performance and higher CPU overhead. I decided to give this card a shot. I could run some benchmark tests and return it if it proved unsuitable.

Installation went without a hitch. chinacat (Ubuntu 7.10 Linux) automatically recognized the disks connected to the add-on controller. The only glitch was that those disks were scanned before the ones on the built-in controller, so my system disk moved from /dev/sda to /dev/sdc. This would have been an annoyance if my /etc/fstab referred to filesystems by device name (such as "/dev/sda1"). Fortunately, my /etc/fstab uses UUIDs or /dev/mapper names to identify filesystems.
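For anyone whose /etc/fstab still refers to raw device names, switching to UUIDs is easy. The following is just an illustrative sketch; the UUID and mount options are placeholders, not values from my system:

    sudo blkid /dev/sdc1
    # prints something like: /dev/sdc1: UUID="1b2c3d4e-0000-0000-0000-123456789abc" TYPE="ext3"

    # /etc/fstab entry keyed on the UUID instead of the device name
    UUID=1b2c3d4e-0000-0000-0000-123456789abc  /  ext3  defaults,errors=remount-ro  0  1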

I used the bonnie++ disk benchmark program to compare the performance of this add-on card to the built-in SATA ports on my ASUS A8N-VM system board.
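For reference, each run looked roughly like the sketch below. The device name and mount point are placeholders, not my exact commands; the 6GB working set matches the Size column in the results, and bonnie++'s rule of thumb is to use a working set well beyond RAM so the page cache can't mask disk speed.

    # make and mount a fresh ext3 filesystem on the drive under test
    # (/dev/sdX1 and /mnt/test are placeholders)
    mkfs.ext3 /dev/sdX1
    mkdir -p /mnt/test
    mount /dev/sdX1 /mnt/test

    # run bonnie++ against it; -u is required when running as root
    bonnie++ -d /mnt/test -s 6g -u root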

I ran the tests by creating a 10GB ext3 filesystem on the drive under test and running bonnie++ in single-user mode. Here are the results:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
               Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
ASUS built-in    6G 41739  97 63675  27 34572  10 47792  96 79044   9 220.0   0
SIIG add-on      6G 41310  96 64334  23 34373  10 46819  96 79255   7 235.4   1

I was mostly concerned about block I/O performance: the figures in the --Block-- columns above.

The results were surprising, in a good way. The inexpensive add-on card performed slightly better than the built-in system ports: block throughput was a bit higher and CPU utilization a bit lower, for both input and output. I was expecting to pay a penalty for hanging disks off the PCI bus, and I didn't.

That solved the port shortage, and I was good to go to build my RAID. Tomorrow, I'll talk about my results from that.
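(For the curious: the mirror itself comes down to a single mdadm command along these lines. The device names are placeholders, and the real build, results and all, is the subject of part two.)

    # assemble the two new drives into a RAID1 mirror (placeholder device names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1
    mkfs.ext3 /dev/md0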

(This article is part one of a three-part series. Part two continues here.)