Discussion:
8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance
Dan Naumov
2010-01-24 16:36:22 UTC
Note: Since my issue is slow performance right off the bat and not
performance degradation over time, I decided to start a separate
discussion. After installing a fresh, pure-ZFS 8.0 system and building
all my ports, I decided to do some benchmarking. At this point, about
a dozen ports have been built and installed, and the system has been
up for about 11 hours. No heavy background services are running,
only sshd and ntpd:

==================================================================================
bonnie -s 8192:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 23821 61.7 22311 19.2 13928 13.7 25029 49.6 44806 17.2 135.0  3.1

During the process, TOP looks like this:

last pid: 83554; load averages: 0.31, 0.31, 0.37 up 0+10:59:01 17:24:19
33 processes: 2 running, 31 sleeping
CPU: 0.1% user, 0.0% nice, 14.1% system, 0.7% interrupt, 85.2% idle
Mem: 45M Active, 4188K Inact, 568M Wired, 144K Cache, 1345M Free
Swap: 3072M Total, 3072M Free

Oh wow, that looks low. Alright, let's run it again, just to be sure:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 18235 46.7 23137 19.9 13927 13.6 24818 49.3 44919 17.3 134.3  2.1

OK, let's reboot the machine and see what kind of numbers we get on a
fresh boot:

===============================================================

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 21041 53.5 22644 19.4 13724 12.8 25321 48.5 43110 14.0 143.2  3.3

Nope, no help from the reboot; the speed is still very low. Here is my pool:

===============================================================

zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE   READ WRITE CKSUM
        tank         ONLINE     0     0     0
          mirror     ONLINE     0     0     0
            ad10s1a  ONLINE     0     0     0
            ad8s1a   ONLINE     0     0     0

===============================================================

diskinfo -c -t /dev/ad10
/dev/ad10
512 # sectorsize
2000398934016 # mediasize in bytes (1.8T)
3907029168 # mediasize in sectors
3876021 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
WD-WCAVY0301430 # Disk ident.

I/O command overhead:
time to read 10MB block 0.164315 sec = 0.008 msec/sector
time to read 20480 sectors 3.030396 sec = 0.148 msec/sector
calculated command overhead = 0.140 msec/sector

Seek times:
Full stroke: 250 iter in 7.309334 sec = 29.237 msec
Half stroke: 250 iter in 5.156117 sec = 20.624 msec
Quarter stroke: 500 iter in 8.147588 sec = 16.295 msec
Short forward: 400 iter in 2.544309 sec = 6.361 msec
Short backward: 400 iter in 2.007679 sec = 5.019 msec
Seq outer: 2048 iter in 0.392994 sec = 0.192 msec
Seq inner: 2048 iter in 0.332582 sec = 0.162 msec
Transfer rates:
outside: 102400 kbytes in 1.576734 sec = 64944 kbytes/sec
middle: 102400 kbytes in 1.381803 sec = 74106 kbytes/sec
inside: 102400 kbytes in 2.145432 sec = 47729 kbytes/sec

===============================================================

diskinfo -c -t /dev/ad8
/dev/ad8
512 # sectorsize
2000398934016 # mediasize in bytes (1.8T)
3907029168 # mediasize in sectors
3876021 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
WD-WCAVY1611513 # Disk ident.

I/O command overhead:
time to read 10MB block 0.176820 sec = 0.009 msec/sector
time to read 20480 sectors 2.966564 sec = 0.145 msec/sector
calculated command overhead = 0.136 msec/sector

Seek times:
Full stroke: 250 iter in 7.993339 sec = 31.973 msec
Half stroke: 250 iter in 5.944923 sec = 23.780 msec
Quarter stroke: 500 iter in 9.744406 sec = 19.489 msec
Short forward: 400 iter in 2.511171 sec = 6.278 msec
Short backward: 400 iter in 2.233714 sec = 5.584 msec
Seq outer: 2048 iter in 0.427523 sec = 0.209 msec
Seq inner: 2048 iter in 0.341185 sec = 0.167 msec
Transfer rates:
outside: 102400 kbytes in 1.516305 sec = 67533 kbytes/sec
middle: 102400 kbytes in 1.351877 sec = 75747 kbytes/sec
inside: 102400 kbytes in 2.090069 sec = 48994 kbytes/sec

===============================================================

The exact same disks, on the exact same machine, are well capable of
65+ MB/s throughput (tested with ATTO multiple times, with different
block sizes) under Windows 2008 Server and NTFS. So what could be the
cause of these very low bonnie numbers in my case? Should I try
some other benchmark, and if so, with what parameters?

- Sincerely,
Dan Naumov
Dan Naumov
2010-01-24 17:42:22 UTC
Hi Dan,
I read on the FreeBSD mailing list that you had some performance issues with
ZFS. Perhaps I can help you with that.
You seem to be running a single mirror, which means you won't see any speed
benefit for writes, and RAID1 implementations usually offer little to
no acceleration of read requests either; some even read only from the master
disk and never touch the 'slave' mirrored disk except when writing. ZFS is
a lot more modern, however, although I have not tested the performance of its
mirror implementation.
1) You use bonnie, but bonnie's tests run without a 'cooldown'
period between them, meaning that when test 2 starts, data from test 1
is still being processed. For single disks and simple I/O this is not so
bad, but with large write-back buffers and more complex I/O buffering it
can skew the results. I patched bonnie for this some time ago, but if you
just want an MB/s number you can use dd for that.
2) The diskinfo mini-benchmark is, I assume, single-queue only, meaning
it will not scale well, or at all, on RAID arrays. Real filesystems on
RAID arrays use multiple queues; instead of reading one sector at a
time, they read, say, 8 blocks (of 16 KiB) "ahead". This is called
read-ahead, and for traditional UFS filesystems it is controlled by the
sysctl vfs.read_max. ZFS works differently, but you still need a "real"
benchmark.
3) You need low-latency hardware; in particular, no plain-PCI controller
should be used. Only PCI Express based controllers or chipset-integrated
Serial ATA controllers deliver proper performance. PCI can hurt performance
very badly and causes high interrupt CPU usage; generally you should avoid
it. PCI Express is fine, though; it is a completely different interface
that is in many ways the opposite of what PCI was.
4) Testing actual realistic I/O performance (in IOPS) is very difficult, but
testing sequential performance is a lot easier. You can use dd for this:

dd if=/dev/ad4 of=/dev/null bs=1M count=1000

if=/dev/ad4 is the input file, the "read source".
of=/dev/null is the output file, the "write destination"; /dev/null discards
the data, so this is a read-only benchmark.
bs=1M is the block size, i.e. how much data to transfer per operation. The
default is 512 bytes (the sector size), which is very slow; a value between
64 KiB and 1024 KiB is appropriate. bs=1M selects 1 MiB (1024 KiB).
count=1000 means transfer 1000 blocks; with bs=1M that is 1000 * 1 MiB
= 1000 MiB.
That example reads raw and sequentially from the start of the device
/dev/ad4. If you want to test RAID arrays, you need to work at the
filesystem level:

dd if=/dev/zero of=/path/to/ZFS/mount/zerofile.000 bs=1M count=2000

This command reads from /dev/zero (endless zeroes) and writes to a file on
the ZFS-mounted filesystem: it creates the file "zerofile.000" and writes
2000 MiB of zeroes to it. In other words, it tests the write performance of
the ZFS-mounted filesystem. To test read performance, you first need to
clear the caches by unmounting that filesystem and re-mounting it. That
frees the memory holding cached parts of the filesystem (reported in top as
"Inact(ive)" instead of "Free").
Please double-check a dd command before running it, and run it as a normal
user instead of root. A wrong dd command may write to the wrong destination
and do things you don't want. The one thing you really need to check is the
write destination (of=...). That's where dd is going to write, so make sure
it is the target you intended. A common mistake of mine was to write
dd of=... if=... (starting with of instead of if) and thus do the exact
opposite of what I meant. This can be disastrous if you work with live
data, so be careful! ;-)
Hope some of this was helpful. During the dd benchmark you can of course
open a second SSH session and run "gstat" to watch the devices' current I/O
statistics.
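The whole procedure above can be condensed into a small script. The file
path and dataset name here are examples only; substitute your own, and for
a meaningful read test size the file larger than RAM so ZFS cannot serve it
from cache:

```shell
#!/bin/sh
# Sketch of the dd write/read test described above.
# TESTFILE is an example path; point it at a file on the ZFS dataset,
# e.g. /tank/home/zerofile.000.
TESTFILE=${TESTFILE:-/tmp/zerofile.000}

# Write test: 256 MiB of zeroes in 1 MiB blocks (increase count for real runs)
dd if=/dev/zero of="$TESTFILE" bs=1M count=256

# Between the tests, clear the cache by remounting the dataset, e.g.
# (hypothetical dataset name):
#   zfs unmount tank/home && zfs mount tank/home

# Read test: read the file back, discarding the data
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

dd prints the bytes/second figure itself when each transfer completes.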
Kind regards,
Jason
Hi and thanks for your tips, I appreciate it :)

[***@atombsd ~]$ dd if=/dev/zero of=/home/jago/test1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 36.206372 secs (29656156 bytes/sec)

[***@atombsd ~]$ dd if=/dev/zero of=/home/jago/test2 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 143.878615 secs (29851325 bytes/sec)
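As a sanity check, the MB/s figure can be recomputed from the byte count and
elapsed time dd reports (numbers taken from the first transfer above):

```shell
# bytes transferred / elapsed seconds / 2^20 = MiB/s
echo "1073741824 36.206372" | awk '{ printf "%.1f MB/s\n", $1 / $2 / 1048576 }'
# → 28.3 MB/s
```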

This works out to 1 GB in 36.2 seconds and 4 GB in 143.9 seconds, roughly
28 MB/s in both cases, which is consistent with the bonnie results. Sadly,
it also seems to confirm the very slow speed :(
The disks are attached to a 4-port Sil3124 controller and again, my
Windows benchmarks showing 65+ MB/s were done on the exact same machine,
with the same disks attached to the same controller. The only difference
was that in Windows the disks weren't in a mirror configuration but were
tested individually. I understand that a mirror setup offers roughly the
same write speed as an individual disk, while the read speed usually varies
from "equal to an individual disk" to "nearly the throughput of both disks
combined" depending on the implementation, but I see no obvious reason why
my setup should offer both read and write speeds roughly 1/3 to 1/2 of what
the individual disks are capable of. Dmesg shows:

atapci0: <SiI 3124 SATA300 controller> port 0x1000-0x100f mem
0x90108000-0x9010807f,0x90100000-0x90107fff irq 21 at device 0.0 on
pci4
ad8: 1907729MB <WDC WD20EADS-32R6B0 01.00A01> at ata4-master SATA300
ad10: 1907729MB <WDC WD20EADS-00R6B0 01.00A01> at ata5-master SATA300

I also recall testing an alternative configuration in the past, where I
booted off a UFS disk and built the ZFS mirror directly on two raw disks.
The bonnie numbers in that case were in line with my expectations: I was
seeing 65-70 MB/s. Note: again, exact same hardware, exact same disks
attached to the exact same controller. To my knowledge, Solaris/OpenSolaris
has an issue where it must automatically disable the disk write cache if
ZFS is used on top of partitions instead of raw disks, but from what I
recall reading in multiple reputable sources, this issue does not affect
FreeBSD.

- Sincerely,
Dan Naumov
Dan Naumov
2010-01-24 18:12:29 UTC
[full quote of the previous two messages snipped]
To add some additional info, for good measure I decided to check whether
the disk write cache is enabled, and sure enough it is:

[***@atombsd /var/log]$ sysctl hw.ata
hw.ata.setmax: 0
hw.ata.wc: 1
hw.ata.atapi_dma: 1
hw.ata.ata_dma_check_80pin: 1
hw.ata.ata_dma: 1

Also if you want to see/know the exact way the system was built and
installed, here is the build script I used:
http://jago.pp.fi/zfsinst.sh

The reason I (and this script) use MBR partitioning instead of GPT is
that my motherboard cannot reliably boot off GPT, but this should not
be relevant to the performance issues shown.

- Sincerely,
Dan Naumov
Dan Naumov
2010-01-24 18:29:52 UTC
On Sun, Jan 24, 2010 at 8:12 PM, Bob Friesenhahn wrote:
[...]
There is a misstatement in the above, namely that a "mirror setup offers
roughly the same write speed as an individual disk". It is possible for a
mirror setup to offer a write speed similar to an individual disk, but it
is also quite possible to get 1/2 (or even 1/3) of that speed. A ZFS write
to a mirror pair requires two independent writes. If these writes go down
independent I/O paths, there is hardly any overhead from the second write.
If they go through a bandwidth-limited shared path, they will contend for
that bandwidth and you will see much lower write performance.
As a simple test, you can temporarily remove the mirror device from the pool
and see whether write performance improves dramatically. Before doing that,
it is useful to look at the output of 'iostat -x 30' while under heavy write
load to see whether one device shows a much higher svc_t value than the other.
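Bob's shared-path argument can be made concrete with rough arithmetic. The
~110 MB/s usable figure for a 32-bit/33 MHz PCI bus below is an illustrative
assumption (133 MB/s theoretical, minus protocol overhead), not a measured
value:

```shell
# Rough shared-bus budget. A mirror write crosses the bus once per disk,
# so the array-level write ceiling is roughly half the usable bus bandwidth.
awk 'BEGIN { bus = 110; printf "mirror write ceiling ~ %.0f MB/s\n", bus / 2 }'
# → mirror write ceiling ~ 55 MB/s
```

Reads over the same shared bus contend with the writes as well, pushing the
observed figure further below that ceiling.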
Ow, ow, WHOA:

atombsd# zpool offline tank ad8s1a

[***@atombsd ~]$ dd if=/dev/zero of=/home/jago/test3 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 16.826016 secs (63814382 bytes/sec)

Offlining one half of the mirror bumps the dd write speed from 28 MB/s to
64 MB/s! Let's see how the bonnie results change:

Mirror with both parts attached:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         8192 18235 46.7 23137 19.9 13927 13.6 24818 49.3 44919 17.3 134.3  2.1

Mirror with one half offline:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
         1024 22888 58.0 41832 35.1 22764 22.0 26775 52.3 54233 18.3 166.0  1.6

OK, the bonnie results have improved, but only a little.

- Sincerely,
Dan Naumov
Dan Naumov
2010-01-24 18:40:36 UTC
[...]
What he said may confirm my suspicion about PCI. If you could try the same
with "real" Serial ATA via a chipset or PCIe controller, you could confirm
this theory. I would be very interested. :P
Kind regards,
Jason
This wouldn't explain why a ZFS mirror built directly on two disks, on the
exact same controller (with the OS running off a separate disk), gives
"expected" performance, while having the OS run off a ZFS mirror built on
top of MBR-partitioned disks, on the same controller, gives very low speed.

- Dan
Alexander Motin
2010-01-24 21:53:43 UTC
Post by Dan Naumov
[...]
atapci0: <SiI 3124 SATA300 controller> port 0x1000-0x100f mem
0x90108000-0x9010807f,0x90100000-0x90107fff irq 21 at device 0.0 on
pci4
ad8: 1907729MB <WDC WD20EADS-32R6B0 01.00A01> at ata4-master SATA300
ad10: 1907729MB <WDC WD20EADS-00R6B0 01.00A01> at ata5-master SATA300
8.0-RELEASE, and especially 8-STABLE, provide an alternative, much more
functional driver for this controller, named siis(4). If your SiI3124
card is installed in a proper bus (PCI-X or PCIe x4/x8), it can be really
fast (up to 1 GB/s has been measured).
--
Alexander Motin
Dan Naumov
2010-01-25 00:14:27 UTC
Post by Alexander Motin
[...]
Sadly, it seems that utilizing the new siis driver doesn't do much good:

Before utilizing siis:

iozone -s 4096M -r 512 -i0 -i1
              KB  reclen   write  rewrite    read   reread
         4194304     512   28796    28766   51610    50695

After enabling siis in loader.conf (and ensuring the disks show up as ada):

iozone -s 4096M -r 512 -i0 -i1

              KB  reclen   write  rewrite    read   reread
         4194304     512   28781    28897   47214    50540
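For reference, the driver swap is done by loading the module at boot from
/boot/loader.conf; a minimal fragment (module name per siis(4) on
FreeBSD 8.x):

```shell
# /boot/loader.conf -- load siis(4) at boot; the SiI3124 ports then
# attach through CAM and the disks show up as ada* instead of ad*
siis_load="YES"
```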

I've checked with the manufacturer, and it seems that the Sil3124 in
this NAS is indeed a PCI card. More info on the card in question is
available at http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
(I have the card described later on the page, the one with 4 SATA ports
and no eSATA). Alright, so it being PCI is probably a bottleneck in
some ways, but that still doesn't explain performance THAT bad,
considering that the same hardware, same disks, and same disk controller
push over 65 MB/s in both reads and writes in Win2008. And again, I am
pretty sure that I had "close to expected" results when I was booting a
UFS FreeBSD installation off an SSD (attached directly to a SATA port on
the motherboard) while running the same kinds of benchmarks with bonnie
and dd on a ZFS mirror made directly on top of two raw disks.


- Sincerely,
Dan Naumov
Dan Naumov
2010-01-25 00:29:49 UTC
Post by Dan Naumov
[...]
iozone -s 4096M -r 512 -i0 -i1 (mirror, before siis):

              KB  reclen   write  rewrite    read   reread
         4194304     512   28796    28766   51610    50695

iozone -s 4096M -r 512 -i0 -i1 (mirror, after siis):

              KB  reclen   write  rewrite    read   reread
         4194304     512   28781    28897   47214    50540
Just to add to the numbers above: the exact same benchmark, on one disk
(the second disk detached from the mirror), while using the siis driver:

              KB  reclen   write  rewrite    read   reread
         4194304     512   57760    56371   68867    74047


- Dan
Alexander Motin
2010-01-25 07:00:54 UTC
Post by Dan Naumov
[quoted iozone results snipped; see the previous message]
If both halves of the mirror use the same controller, the controller's bus
traffic is doubled. That alone can cut the bandwidth in half.

The main benefit of siis(4) is command queuing; you should see bigger
gains on multithreaded random I/O.
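One way to see the queuing benefit is to run two sequential readers at
once. This sketch reads two temporary files so it is safe to run anywhere;
on the real system you would point dd at the raw devices instead (e.g.
/dev/ada0 and /dev/ada1):

```shell
#!/bin/sh
# Prepare two 64 MiB files to stand in for the two disks
F1=/tmp/reader1.bin; F2=/tmp/reader2.bin
dd if=/dev/zero of="$F1" bs=1M count=64 2>/dev/null
dd if=/dev/zero of="$F2" bs=1M count=64 2>/dev/null

# Two concurrent sequential streams; with command queuing (siis) the
# aggregate throughput should hold up better than with the old ata(4) driver
dd if="$F1" of=/dev/null bs=1M &
dd if="$F2" of=/dev/null bs=1M &
wait

rm -f "$F1" "$F2"
```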
--
Alexander Motin
Bob Friesenhahn
2010-01-25 05:33:07 UTC
Post by Dan Naumov
[...]
The slow PCI bus and this card look like the bottleneck to me.
Remember that your Win2008 tests were with just one disk, your ZFS
performance with just one disk was similar to Win2008, and your ZFS
performance with a mirror was just under half that.

I don't think your performance results are necessarily out of line
for the hardware you are using.

On an old Sun SPARC workstation with retrofitted 15K RPM drives on an
Ultra-160 SCSI channel, I see a zfs mirror write performance of
67,317KB/second and a read performance of 124,347KB/second. The
drives themselves are capable of 100MB/second range performance.
Similar to yourself, I see 1/2 the write performance due to bandwidth
limitations.

Bob
--
Bob Friesenhahn
***@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Dan Naumov
2010-01-25 07:34:52 UTC
Permalink
On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
Post by Dan Naumov
I've checked with the manufacturer and it seems that the Sil3124 in
this NAS is indeed a PCI card. More info on the card in question is
available at
http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
I have the card described later on the page, the one with 4 SATA ports
and no eSATA. Alright, so it being PCI is probably a bottleneck in
some ways, but that still doesn't explain the performance THAT bad,
considering that same hardware, same disks, same disk controller push
over 65MB/s in both reads and writes in Win2008. And again, I am
pretty sure that I've had "close to expected" results when I was
The slow PCI bus and this card look like the bottleneck to me. Remember that
your Win2008 tests were with just one disk, your zfs performance with just
one disk was similar to Win2008, and your zfs performance with a mirror was
just under 1/2 that.
I don't think that your performance results are necessarily out of line for
the hardware you are using.
On an old Sun SPARC workstation with retrofitted 15K RPM drives on Ultra-160
SCSI channel, I see a zfs mirror write performance of 67,317KB/second and a
read performance of 124,347KB/second.  The drives themselves are capable of
100MB/second range performance. Similar to yourself, I see 1/2 the write
performance due to bandwidth limitations.
Bob
There is lots of very sweet irony in my particular situation.
Initially I was planning to use a single X25-M 80GB SSD in the
motherboard SATA port for the actual OS installation, as well as to
dedicate 50GB of it to become a designated L2ARC vdev for my ZFS
mirrors. The SSD attached to the motherboard port would be recognized
only as a SATA150 device for some reason, but I was still seeing
150MB/s throughput and sub-0.1 ms latencies on that disk, simply
because of how crazy good the X25-Ms are. However, I ended up having
very bad issues with the Icydock 2.5" to 3.5" converter jacket I was
using to fit the SSD in the system: it would randomly drop write IO
under heavy load due to bad connectors. Having finally figured out
why my OS installations to the SSD kept going belly up while applying
updates, I decided to move the SSD to my desktop and use it there
instead, additionally thinking that perhaps my idea of the SSD was
crazy overkill for what I need the system to do. Ironically, now that
I am seeing how horrible the performance is when I am operating on
the mirror through this PCI card, I realize that actually my idea was
pretty bloody brilliant, I just didn't really know why at the time.

An L2ARC device on the motherboard port would really help me with
random read IO, but to work around the utterly poor write performance,
I would also need a dedicated SLOG/ZIL device. The catch is that while
L2ARC devices can be removed from the pool at will (should the device
up and die all of a sudden), dedicated ZILs cannot, and currently a
"missing" ZIL device will render the pool it belongs to unable to
import and thus inaccessible. There is some work happening in Solaris
to implement removing SLOGs from a pool, but that work hasn't found
its way into FreeBSD yet.
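For illustration, the asymmetry looks like this on the command line (pool and device names hypothetical; on the ZFS version in 8.0-RELEASE, "zpool remove" works for cache devices and hot spares, but not for a dedicated log vdev):

```
# zpool add tank cache ada1    (attach an L2ARC device)
# zpool remove tank ada1       (cache devices can be removed at will)
# zpool add tank log ada2      (attach a dedicated ZIL / SLOG)
#   ...there is no working "zpool remove" for a log vdev here; if the
#   device disappears, the pool can become impossible to import
```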


- Sincerely,
Dan Naumov

Dan Naumov
2010-01-25 08:32:19 UTC
Permalink
Post by Dan Naumov
On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
Post by Dan Naumov
I've checked with the manufacturer and it seems that the Sil3124 in
this NAS is indeed a PCI card. More info on the card in question is
available at
http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
I have the card described later on the page, the one with 4 SATA ports
and no eSATA. Alright, so it being PCI is probably a bottleneck in
some ways, but that still doesn't explain the performance THAT bad,
considering that same hardware, same disks, same disk controller push
over 65MB/s in both reads and writes in Win2008. And again, I am
pretty sure that I've had "close to expected" results when I was
The slow PCI bus and this card look like the bottleneck to me. Remember that
your Win2008 tests were with just one disk, your zfs performance with just
one disk was similar to Win2008, and your zfs performance with a mirror was
just under 1/2 that.
I don't think that your performance results are necessarily out of line for
the hardware you are using.
On an old Sun SPARC workstation with retrofitted 15K RPM drives on Ultra-160
SCSI channel, I see a zfs mirror write performance of 67,317KB/second and a
read performance of 124,347KB/second.  The drives themselves are capable of
100MB/second range performance. Similar to yourself, I see 1/2 the write
performance due to bandwidth limitations.
Bob
There is lots of very sweet irony in my particular situation.
Initially I was planning to use a single X25-M 80GB SSD in the
motherboard SATA port for the actual OS installation, as well as to
dedicate 50GB of it to become a designated L2ARC vdev for my ZFS
mirrors. The SSD attached to the motherboard port would be recognized
only as a SATA150 device for some reason, but I was still seeing
150MB/s throughput and sub-0.1 ms latencies on that disk, simply
because of how crazy good the X25-Ms are. However, I ended up having
very bad issues with the Icydock 2.5" to 3.5" converter jacket I was
using to fit the SSD in the system: it would randomly drop write IO
under heavy load due to bad connectors. Having finally figured out
why my OS installations to the SSD kept going belly up while applying
updates, I decided to move the SSD to my desktop and use it there
instead, additionally thinking that perhaps my idea of the SSD was
crazy overkill for what I need the system to do. Ironically, now that
I am seeing how horrible the performance is when I am operating on
the mirror through this PCI card, I realize that actually my idea was
pretty bloody brilliant, I just didn't really know why at the time.
An L2ARC device on the motherboard port would really help me with
random read IO, but to work around the utterly poor write performance,
I would also need a dedicated SLOG/ZIL device. The catch is that while
L2ARC devices can be removed from the pool at will (should the device
up and die all of a sudden), dedicated ZILs cannot, and currently a
"missing" ZIL device will render the pool it belongs to unable to
import and thus inaccessible. There is some work happening in Solaris
to implement removing SLOGs from a pool, but that work hasn't found
its way into FreeBSD yet.
- Sincerely,
Dan Naumov
OK, final question: if/when I go about adding more disks to the system
and want redundancy, am I right in thinking that a ZFS pool of a
disk1+disk2 mirror plus a disk3+disk4 mirror (a la RAID10) would drive
my write and read performance even further below the current 28MB/s /
50MB/s I am seeing with 2 disks on that PCI controller, and that in
order to have the least negative impact, I should simply have 2
independent mirrors in 2 independent pools (with the 5th disk slot in
the NAS given to a non-redundant single disk running off the one
available SATA port on the motherboard)?
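The two layouts being weighed, as a hedged sketch (pool and device names hypothetical):

```
# One pool striped over two mirrors (RAID10-style); every write in the
# pool competes for the same shared PCI bus behind the card:
#   zpool create tank mirror ad4 ad6 mirror ad8 ad10
#
# Two independent pools; a given workload only touches one disk pair:
#   zpool create tank1 mirror ad4 ad6
#   zpool create tank2 mirror ad8 ad10
```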

- Sincerely,
Dan Naumov
Thomas Burgess
2010-01-25 08:58:48 UTC
Permalink
It depends on the bandwidth of the bus that it is on, and on the
controller itself.

I like to use PCI-X with AOC-SAT2-MV8 cards, or PCIe cards... that way
you get a lot more bandwidth.
Post by Bob Friesenhahn
Post by Dan Naumov
On Mon, Jan 25, 2010 at 7:33 AM, Bob Friesenhahn
Post by Dan Naumov
I've checked with the manufacturer and it seems that the Sil3124 in
this NAS is indeed a PCI card. More info on the card in question is
available at
http://green-pcs.co.uk/2009/01/28/tranquil-bbs2-those-pci-cards/
I have the card described later on the page, the one with 4 SATA ports
and no eSATA. Alright, so it being PCI is probably a bottleneck in
some ways, but that still doesn't explain the performance THAT bad,
considering that same hardware, same disks, same disk controller push
over 65MB/s in both reads and writes in Win2008. And again, I am
pretty sure that I've had "close to expected" results when I was
The slow PCI bus and this card look like the bottleneck to me. Remember
that
Post by Dan Naumov
your Win2008 tests were with just one disk, your zfs performance with
just
Post by Dan Naumov
one disk was similar to Win2008, and your zfs performance with a mirror
was
Post by Dan Naumov
just under 1/2 that.
I don't think that your performance results are necessarily out of line
for
Post by Dan Naumov
the hardware you are using.
On an old Sun SPARC workstation with retrofitted 15K RPM drives on
Ultra-160
Post by Dan Naumov
SCSI channel, I see a zfs mirror write performance of 67,317KB/second
and a
Post by Dan Naumov
read performance of 124,347KB/second. The drives themselves are capable
of
Post by Dan Naumov
100MB/second range performance. Similar to yourself, I see 1/2 the write
performance due to bandwidth limitations.
Bob
There is lots of very sweet irony in my particular situiation.
Initially I was planning to use a single X25-M 80gb SSD in the
motherboard sata port for the actual OS installation as well as to
dedicate 50gb of it to a become a designaed L2ARC vdev for my ZFS
mirrors. The SSD attached to the motherboard port would be recognized
only as a SATA150 device for some reason, but I was still seeing
150mb/s throughput and sub 0.1 ms latencies on that disk simply
because of how crazy good the X25-M's are. However I ended up having
very bad issues with the Icydock 2,5" to 3,5" converter jacket I was
using to keep/fit the SSD in the system and it would randomly drop
write IO on heavy load due to bad connectors. Having finally figured
out the cause of my OS installations to the SSD going belly up during
applying updates, I decided to move the SSD to my desktop and use it
there instead, additionally thinking that my perhaps my idea of the
SSD was crazy overkill for what I need the system to do. Ironically
now that I am seeing how horrible the performance is when I am
operating on the mirror through this PCI card, I realize that
actually, my idea was pretty bloody brilliant, I just didn't really
know why at the time.
An L2ARC device on the motherboard port would really help me with
random read IO, but to work around the utterly poor write performance,
I would also need a dedicaled SLOG ZIL device. The catch is that while
L2ARC devices and be removed from the pool at will (should the device
up and die all of a sudden), the dedicated ZILs cannot and currently a
"missing" ZIL device will render the pool it's included in be unable
to import and become inaccessible. There is some work happening in
Solaris to implement removing SLOGs from a pool, but that work hasn't
yet found it's way in FreeBSD yet.
- Sincerely,
Dan Naumov
OK, final question: if/when I go about adding more disks to the system
and want redundancy, am I right in thinking that a ZFS pool of a
disk1+disk2 mirror plus a disk3+disk4 mirror (a la RAID10) would drive
my write and read performance even further below the current 28MB/s /
50MB/s I am seeing with 2 disks on that PCI controller, and that in
order to have the least negative impact, I should simply have 2
independent mirrors in 2 independent pools (with the 5th disk slot in
the NAS given to a non-redundant single disk running off the one
available SATA port on the motherboard)?
- Sincerely,
Dan Naumov
_______________________________________________
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
Pete French
2010-01-25 11:29:15 UTC
Permalink
Post by Thomas Burgess
I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards....that way you
get a lot more bandwidth..
I would go along with that - I have precisely the same controller, with
a pair of eSATA drives, running ZFS mirrored. But I get a nice 100
MB/second out of them if I try. My controller is, however, on PCI-X, not
PCI. It's a shame PCI-X appears to have gone the way of the dinosaur :-(

-pete.
Artem Belevich
2010-01-25 17:04:00 UTC
Permalink
The aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068
controllers when I tried it with 6 and 8 disks.
I think the problem is that the MV8 only does 32K per transfer, and
that does seem to matter when you have 8 drives hooked up to it. I
don't have hard numbers, but peak throughput of the MV8 with an 8-disk
raidz2 was noticeably lower than that of the LSI1068 in the same
configuration. Both the LSI1068 and the MV8 were on the same PCI-X
bus. It could be a driver limitation. The driver for Marvell SATA
controllers in NetBSD seems a bit more advanced compared to what's in
FreeBSD.
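If the 32K-per-transfer figure is right, the transaction rate needed to keep 8 drives busy is easy to estimate (the per-drive throughput figure is illustrative):

```shell
# 8 drives at ~80MB/s each is ~640MB/s aggregate; at 32KB per DMA
# transfer that works out to 20480 transfers/second, each paying
# PCI-X arbitration/setup overhead - a plausible place for peak
# throughput to go.
drives=8
mbs_per_drive=80
transfer_kb=32

total_mbs=$((drives * mbs_per_drive))
transfers_per_sec=$((total_mbs * 1024 / transfer_kb))
echo "${total_mbs} MB/s aggregate -> ${transfers_per_sec} transfers/s"
```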

I wish Intel would make a cheap multi-port PCIe SATA card based on
their AHCI controllers.

--Artem

On Mon, Jan 25, 2010 at 3:29 AM, Pete French
Post by Pete French
Post by Thomas Burgess
I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards....that way you
get a lot more bandwidth..
I would go along with that - I have precisely the same controller, with
a pair of eSATA drives, running ZFS mirrored. But I get a nice 100
meg/second out of them if I try. My controller is, however on PCI-X, not
PCI. It's a shame PCI-X appears to have gone the way of the dinosaur :-(
-pete.
_______________________________________________
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
Alexander Motin
2010-01-25 17:40:50 UTC
Permalink
Post by Artem Belevich
aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068
controllers when I tried it with 6 and 8 disks.
I think the problem is that MV8 only does 32K per transfer and that
does seem to matter when you have 8 drives hooked up to it. I don't
have hard numbers, but peak throughput of MV8 with 8-disk raidz2 was
noticeably lower than that of LSI1068 in the same configuration. Both
LSI1068 and MV2 were on the same PCI-X bus. It could be a driver
limitation. The driver for Marvel SATA controllers in NetBSD seems a
bit more advanced compared to what's in FreeBSD.
I also wouldn't recommend using Marvell 88SXx0xx controllers now. While
they are potentially interesting, the lack of documentation and numerous
hardware bugs make the existing FreeBSD driver very limited there.
Post by Artem Belevich
I wish intel would make cheap multi-port PCIe SATA card based on their
AHCI controllers.
Indeed. Intel on-board AHCI SATA controllers are the fastest of all I
have tested. Unluckily, they are not producing discrete versions. :(

Now, if a discrete solution is really needed, I would still recommend
the SiI3124, but with a proper PCI-X 64bit/133MHz bus or a built-in
PCIe x8 bridge. They are fast and have a good new siis(4) driver.
Post by Artem Belevich
On Mon, Jan 25, 2010 at 3:29 AM, Pete French
Post by Pete French
Post by Thomas Burgess
I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards....that way you
get a lot more bandwidth..
I would go along with that - I have precisely the same controller, with
a pair of eSATA drives, running ZFS mirrored. But I get a nice 100
meg/second out of them if I try. My controller is, however on PCI-X, not
PCI. It's a shame PCI-X appears to have gone the way of the dinosaur :-(
--
Alexander Motin
Dan Naumov
2010-01-25 18:02:58 UTC
Permalink
Post by Alexander Motin
Post by Artem Belevich
aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068
controllers when I tried it with 6 and 8 disks.
I think the problem is that MV8 only does 32K per transfer and that
does seem to matter when you have 8 drives hooked up to it. I don't
have hard numbers, but peak throughput of MV8 with 8-disk raidz2 was
noticeably lower than that of LSI1068 in the same configuration. Both
LSI1068 and MV2 were on the same PCI-X bus. It could be a driver
limitation. The driver for Marvel SATA controllers in NetBSD seems a
bit more advanced compared to what's in FreeBSD.
I also wouldn't recommend to use Marvell 88SXx0xx controllers now. While
potentially they are interesting, lack of documentation and numerous
hardware bugs make existing FreeBSD driver very limited there.
Post by Artem Belevich
I wish intel would make cheap multi-port PCIe SATA card based on their
AHCI controllers.
Indeed. Intel on-board AHCI SATA controllers are fastest from all I have
tested. Unluckily, they are not producing discrete versions. :(
Now, if discrete solution is really needed, I would still recommend
SiI3124, but with proper PCI-X 64bit/133MHz bus or built-in PCIe x8
bridge. They are fast and have good new siis driver.
Post by Artem Belevich
On Mon, Jan 25, 2010 at 3:29 AM, Pete French
Post by Pete French
Post by Thomas Burgess
I like to use pci-x with aoc-sat2-mv8 cards or pci-e cards....that way you
get a lot more bandwidth..
I would go along with that - I have precisely the same controller, with
a pair of eSATA drives, running ZFS mirrored. But I get a nice 100
meg/second out of them if I try. My controller is, however on PCI-X, not
PCI. It's a shame PCI-X appears to have gone the way of the dinosaur :-(
--
Alexander Motin
Alexander, since you seem to be experienced in the area, what do you
think of these 2 for use in a FreeBSD8 ZFS NAS:

http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H&IPMI=Y

- Sincerely,
Dan Naumov
Alexander Motin
2010-01-25 18:32:33 UTC
Permalink
Post by Dan Naumov
Alexander, since you seem to be experienced in the area, what do you
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H&IPMI=Y
Unluckily, I haven't worked closely with the Atom family yet, so I
can't speak to its performance. But the higher desktop-level (even if
a bit old) ICH9R chipset there is IMHO a good option. It is MUCH
better than the ICH7 often used with previous Atoms. If I had a nice
small Mini-ITX case with 6 drive bays, I would definitely look for a
board like that to build home storage.
--
Alexander Motin
Dan Naumov
2010-01-25 18:39:01 UTC
Permalink
Post by Alexander Motin
Post by Dan Naumov
Alexander, since you seem to be experienced in the area, what do you
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPA.cfm?typ=H&IPMI=Y
Unluckily, I haven't worked closely with the Atom family yet, so I
can't speak to its performance. But the higher desktop-level (even if
a bit old) ICH9R chipset there is IMHO a good option. It is MUCH
better than the ICH7 often used with previous Atoms. If I had a nice
small Mini-ITX case with 6 drive bays, I would definitely look for a
board like that to build home storage.
--
Alexander Motin
CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of the ICH9R, and 6
native SATA ports should be enough for my needs. AFAIK, the Intel
82574L network cards included on those boards are also very well
supported?

- Sincerely,
Dan Naumov
Chris Whitehouse
2010-01-25 20:34:38 UTC
Permalink
Post by Dan Naumov
CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of ICH9R and 6 native
SATA ports should be enough for my needs. AFAIK, the Intel 82574L
network cards included on those are also very well supported?
These might be interesting then
www.fit-pc.com
The Intel US15W SCH chipset or System Controller Hub as it's called is
mentioned in hardware notes for 8.0R and 7.2R but only for snd_hda, I
don't know if this means other functions are supported or not. This
thread says it is supported
http://mail-index.netbsd.org/port-i386/2010/01/03/msg001695.html

Chris

ps I removed some of the recipients from the recipients list as my
original post was held for moderation because of "Too many recipients to
the message"
Post by Dan Naumov
- Sincerely,
Dan Naumov
_______________________________________________
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
Alexander Motin
2010-01-25 20:34:53 UTC
Permalink
Post by Chris Whitehouse
Post by Dan Naumov
CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of ICH9R and 6 native
SATA ports should be enough for my needs. AFAIK, the Intel 82574L
network cards included on those are also very well supported?
These might be interesting then
www.fit-pc.com
The Intel US15W SCH chipset or System Controller Hub as it's called is
mentioned in hardware notes for 8.0R and 7.2R but only for snd_hda, I
don't know if this means other functions are supported or not. This
thread says it is supported
http://mail-index.netbsd.org/port-i386/2010/01/03/msg001695.html
The Intel US15W (SCH) chipset is heavily stripped down and tuned for
netbooks. It has no SATA, only one PATA channel. It is mostly supported
by FreeBSD, with the exception of video, which makes it close to
useless. It has only one benefit - low power consumption.
--
Alexander Motin
Chris Whitehouse
2010-01-25 20:57:20 UTC
Permalink
Post by Alexander Motin
Post by Chris Whitehouse
Post by Dan Naumov
CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of ICH9R and 6 native
SATA ports should be enough for my needs. AFAIK, the Intel 82574L
network cards included on those are also very well supported?
These might be interesting then
www.fit-pc.com
The Intel US15W SCH chipset or System Controller Hub as it's called is
mentioned in hardware notes for 8.0R and 7.2R but only for snd_hda, I
don't know if this means other functions are supported or not. This
thread says it is supported
http://mail-index.netbsd.org/port-i386/2010/01/03/msg001695.html
The Intel US15W (SCH) chipset is heavily stripped down and tuned for
netbooks. It has no SATA, only one PATA channel. It is mostly supported
by FreeBSD, with the exception of video, which makes it close to
useless. It has only one benefit - low power consumption.
The Intel spec sheet does say single PATA, but according to the fit-pc
website it has SATA and miniSD. Still as you say without video support
it's not much use, which is useful to know as I had been looking at
these. Ok I will go away now :O

Chris
Chris Whitehouse
2010-01-25 20:26:32 UTC
Permalink
Post by Dan Naumov
CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of ICH9R and 6 native
SATA ports should be enough for my needs. AFAIK, the Intel 82574L
network cards included on those are also very well supported?
These might be interesting then
www.fit-pc.com
The Intel US15W SCH chipset or System Controller Hub as it's called is
mentioned in hardware notes for 8.0R and 7.2R but only for snd_hda, I
don't know if this means other functions are supported or not. This
thread says it is supported
http://mail-index.netbsd.org/port-i386/2010/01/03/msg001695.html

Chris
Post by Dan Naumov
- Sincerely,
Dan Naumov
Daniel O'Connor
2010-01-25 23:15:57 UTC
Permalink
Post by Dan Naumov
CPU-performance-wise, I am not really worried. The current system is
an Atom 330 and even that is a bit overkill for what I do with it and
from what I am seeing, the new Atom D510 used on those boards is a
tiny bit faster. What I want and care about for this system are
reliability, stability, low power use, quietness and fast disk
read/write speeds. I've been hearing some praise of ICH9R and 6
native SATA ports should be enough for my needs. AFAIK, the Intel
82574L network cards included on those are also very well supported?
You might want to consider an Athlon (maybe underclock it) - the AMD IXP
700/800 south bridge seems to work well with FreeBSD (in my
experience).

These boards (e.g. the Gigabyte GA-MA785GM-US2H) have 6 SATA ports (one
may be eSATA though) and PATA; they seem ideal, really. You can use
PATA with a CF card to boot, and connect 5 disks plus a DVD drive.

The CPU is not fanless, however, but the other parts are; on the plus
side, you won't have to worry about CPU power :)

Also, the onboard video works well with radeonhd and is quite fast.

One other downside is the onboard network isn't great (Realtek) but I
put an em card in mine.
--
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
"The nice thing about standards is that there
are so many of them to choose from."
-- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
James R. Van Artsdalen
2010-02-03 10:15:06 UTC
Permalink
Post by Dan Naumov
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 143.878615 secs (29851325 bytes/sec)
This works out to 1GB in 36.2 seconds / 28.2MB/s in the first test, and
4GB in 143.8 seconds / 28.4MB/s.
For the record, better results can be seen. In my test I put 3 Seagate
Barracuda XT drives in a port multiplier and connected that to one port
of a PCIe 3124 card.

The MIRROR case is at about the I/O bandwidth limit of those drives.

[***@kraken ~]# zpool create tmpx ada{2,3,4}
[***@kraken ~]# dd if=/dev/zero of=/tmpx/test2 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 20.892818 secs (205571470 bytes/sec)
[***@kraken ~]# zpool destroy tmpx
[***@kraken ~]# zpool create tmpx mirror ada{2,3}
[***@kraken ~]# dd if=/dev/zero of=/tmpx/test2 bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 36.432818 secs (117887321 bytes/sec)
[***@kraken ~]#
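Sanity-checking the arithmetic from the dd transcripts above (a pure recomputation of the numbers already quoted, converting bytes/seconds to MiB/s):

```shell
# dd reports total bytes and elapsed seconds; convert to MiB/s.
mbs() { awk -v b="$1" -v s="$2" 'BEGIN { printf "%.1f", b / s / 1048576 }'; }

slow=$(mbs 4294967296 143.878615)    # Dan's mirror behind the PCI SiI3124
stripe=$(mbs 4294967296 20.892818)   # 3-disk stripe on the PCIe 3124
mirror=$(mbs 4294967296 36.432818)   # 2-disk mirror on the PCIe 3124

echo "PCI mirror:  ${slow} MB/s"
echo "PCIe stripe: ${stripe} MB/s"
echo "PCIe mirror: ${mirror} MB/s"
```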
