With the same 5-drive RAIDZ setup under FreeBSD 7.2-RELEASE as in the earlier benchmarks, I first created a text file containing the sentence "This is a test of data corruption in live filesystems." I saved that text file to the root of the RAIDZ, then unmounted the pool, then kldunloaded the ZFS module from FreeBSD's kernel.
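In command terms, that setup and teardown boiled down to roughly the following. This is my reconstruction rather than a transcript, and I'm assuming the pool was exported rather than merely unmounted, since the zfs module generally won't unload while a pool is still active:

# echo "This is a test of data corruption in live filesystems." > /backup/test.txt
# zpool export backup
# kldunload zfs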
Next, I used a small Perl script to look through /dev/ad8 - one of the five physical drives in the array - to find the sentence above. Having found it, I then did a raw write to /dev/ad8 changing "This" to "Tgjs". Presto, data corruption!
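The script itself was nothing fancy. Here's a minimal sketch of the idea - my reconstruction, not the original script - which assumes a 512-byte sector size, since FreeBSD's raw disk devices want sector-aligned reads and writes:

#!/usr/bin/perl
# Sketch: find a known sentence on a raw disk device and overwrite 4 bytes of it.
use strict;
use warnings;

my $dev        = '/dev/ad8';    # one of the five drives in the RAIDZ
my $needle     = 'This is a test of data corruption in live filesystems.';
my $chunk_size = 1 << 20;       # scan the device 1 MiB at a time

open my $fh, '+<', $dev or die "open $dev: $!";
binmode $fh;

my $overlap = length($needle) - 1;   # keep a tail so a match spanning chunks isn't missed
my $carry   = '';
my $offset  = 0;                     # device offset where $carry begins

while (sysread($fh, my $chunk, $chunk_size)) {
    my $buf = $carry . $chunk;
    my $pos = index($buf, $needle);
    if ($pos >= 0) {
        my $hit    = $offset + $pos;          # absolute offset of "This"
        my $sector = int($hit / 512) * 512;   # align to a 512-byte sector boundary
        my $within = $hit - $sector;
        sysseek($fh, $sector, 0) or die "seek: $!";
        sysread($fh, my $block, 1024) == 1024 or die "short read";
        substr($block, $within, 4) = 'Tgjs';  # "This" -> "Tgjs"
        sysseek($fh, $sector, 0) or die "seek: $!";
        syswrite($fh, $block, 1024) == 1024 or die "short write";
        print "patched 4 bytes at device offset $hit\n";
        last;
    }
    $carry   = substr($buf, -$overlap);
    $offset += length($buf) - $overlap;
}

close $fh;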
Now, I re-kldloaded zfs.ko, then re-mounted the pool, and did a quick cat test.txt:
# cat /backup/test.txt
This is a test of data corruption in live filesystems.
#
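For the record, the reload and remount amounted to something like this - again assuming the pool had been exported earlier:

# kldload zfs
# zpool import backup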
Excellent - ZFS did in fact catch and heal the corruption I introduced - and if I check the status of the pool, it will warn me about it:
# zpool status backup
  pool: backup
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        backup      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     1
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0

errors: No known data errors
That is pretty freaking awesome.
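If I want to clear that logged checksum error - and have ZFS re-verify every block in the pool while it's at it - the status output above already spells out the commands:

# zpool scrub backup
# zpool clear backup ad8

Once the scrub completes and the error is cleared, the CKSUM column for ad8 goes back to zero.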
One nasty caveat: thanks to an inadvisable configuration line in /etc/devd.conf, by default ZFS errors are not written to any system log. If you haven't specifically changed the logging configuration on your machine, ZFS errors end up going to user.warn, which, again by default, effectively means going to /dev/null. See here for more info on the logging SNAFU.
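If you just want those messages to land somewhere in the meantime, one workaround (the log file name here is my own choice) is to give user.warn a destination in /etc/syslog.conf:

user.warn                                       /var/log/zfs-errors.log

then create the file and tell syslogd to re-read its configuration:

# touch /var/log/zfs-errors.log
# /etc/rc.d/syslogd reload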