Lurked for a while, but for some reason this thread made me register so I could
reply. (sorry in advance for the length of the post)
/soap box::on/
RAID only increases the availability of the data on the array. RAID is not a replacement for a backup scheme.
/soap box::off/
veloce851, gotta ask - where are you buying these drives from, and how are they being shipped? That is an abnormally high number of drive failures for one user with that few drives. I would keep RMA'ing drives that don't sound right.
I have an Areca 1130 hosting a 10-drive RAID 5 and a 3ware 9650SE with only a 4-drive RAID 10 on my two most-used home machines.
There is no really awesome way to handle TBs of data for backup as a home user on the cheap. It's even harder when the data isn't static.
With all of that said, with 4-5TB of data that you want to archive you need to figure out something that will work for you.
If the data is static and you're looking to drop most of it off in a safety deposit box at a bank or something, I would give tape a serious look (good buys on used but still very high-quality enterprise gear can be had). Tape is good to archive on for years, whereas HDDs that have sat unpowered for 10 years..... not so much.
If your data is a little more dynamic, or you want faster access than a trip to the safe deposit box, I would start looking at a SOHO NAS if you want the simple route (although not the cheapest).
As for drive makers, I have owned/used plenty from all the majors. All will fail given enough use and time. I have had the best luck with ones that were shipped properly (aka not Newegg OEM drives shipped free-floating in a box of foam peanuts). I do, however, usually try to stick with WD or Seagate for my home stuff.
There are a lot of things to keep in mind if you do go with a real hardware RAID card setup:
-with a good number of disks in a RAID setup, if one starts to fail another is probably soon to follow. This is how many people lose all the data on their RAID: another drive fails or hiccups during the array rebuild and presto, everything is gone. Hot spares help offset this some since the rebuild can start sooner, but with some cards and enough drives a rebuild can take many, many hours (if not days), and your entire RAID volume is unprotected during that whole time (more fault-tolerant RAID levels notwithstanding).
-try hard not to mix and match drives, firmware versions, etc. on a RAID volume.
-don't use desktop-edition drives with real RAID cards. Use RAID-edition ones (you want TLER/ERC/CCTL with a real RAID card and SATA drives).
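To put a rough number on the rebuild-risk point above, here's a back-of-the-envelope sketch. It only models unrecoverable read errors (UREs) during a RAID 5 rebuild, not outright second-drive failures; the drive count, capacity, and the 1e-14 errors-per-bit figure are illustrative assumptions (1e-14 is a commonly quoted consumer-drive spec, enterprise drives are usually rated 1e-15).

```python
# Sketch: chance of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded RAID 5 array. During a rebuild every bit on
# every surviving drive must be read successfully.

def rebuild_ure_probability(drives_total, drive_tb, ure_per_bit=1e-14):
    """Probability of at least one URE while reading all surviving drives."""
    surviving = drives_total - 1              # RAID 5: one drive already lost
    bits_read = surviving * drive_tb * 1e12 * 8
    return 1 - (1 - ure_per_bit) ** bits_read

# Hypothetical 10-drive RAID 5 of 2 TB consumer-class drives:
print(f"{rebuild_ure_probability(10, 2.0):.0%}")   # roughly 3-in-4 odds
```

It's a crude model, but it shows why big arrays of consumer drives are scary: the more (and bigger) the drives, the more data a rebuild has to read cleanly, and the odds stack up fast.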
I think that covers the basics, would love to hear what route you go with.