What RAID do you use? Do you use RAID?

We started off using Enhance Technology RAIDs with 1TB drives in RAID5 (because we needed as much of that space as possible)... then we switched to QNAP arrays because the customer needed lower cost, and started using RAID6 when 2TB drives became more cost-effective... then back to Enhance when their price dropped and we had some issues with crappy support from QNAP... and at the last site we used a Promise array: a 16-bay, 3U unit, initially loaded with only eight disks but leaving room for expansion. All of them are now running RAID6 + hotspare.


All RAID6 for me. All machines have 11 or 12 drives, using 500GB, 1TB, and 2TB drives (depending on how old the box is). Older machines have Areca controllers (which I LOVE); newer ones have LSI controllers (the GUI is clunky, but they work well).

 

After 4 years, I finally had my first double-failure in an array last month during a rebuild. So glad I was using RAID6 and not 5!

 

Also, I use all Seagate ES.2 enterprise drives.

 

My home box uses an LSI controller and 5×1TB consumer drives in RAID5. I'd hate to lose the data, but it's not the end of the world if I do. At work, however, loss of data is a fineable offense...

@Soundy What RAID card did you use with the Promise unit?

None - we're using iSCSI. VessRAID 1840i, specifically. The QNAPs and Enhance arrays were all using iSCSI as well.

 

Dustmop: not using any hotspares? I find it so handy: if a drive fails, the unit automatically removes the bad drive, adds the spare to the array, starts rebuilding, and then sends me a warning email, so I can come out at my next convenience to replace the failed drive and designate the new one as the hotspare.


In the machines with Areca controllers, there is a hotspare (setting hotspares in LSI's lousy GUI is a pain). That's how I got the double failure: about halfway through the automatic rebuild, a second drive in the array had an "unrecoverable error".

 

Nice thing with Areca is that once a hotspare is used, after you pull the bad drive, the new one you put in is automatically made the hotspare. Now THAT is handy! LSI requires user interaction for almost all steps after a drive failure.

@Soundy how much data are you throwing at these iSCSI units?

Looking at one site right now, it has 20 analog cameras, three Arecont AV3155DNs (currently in B&W), and one 3xLogic VSX-2MP-VD... it's in a busy restaurant and it's dinner time right now, so it's probably seeing maximum activity... I'm showing an average of around 15 Mbps from the DVR to the array, with a few peaks at 25 Mbps.

 

Just checking another site, with 28 analog cameras and three VSX-2MP-VDs, I'm showing a fairly steady 18–19 Mbps, DVR-to-RAID (it's a smoother graph because the VSX cams, I think, are CBR, while the Areconts are VBR).
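To put figures like these in storage terms, converting a sustained bitrate into capacity per day is simple arithmetic (a rough sketch; the 19 Mbps figure comes from the post above, and the 30-day retention period is just an assumed example):

```python
def daily_storage_gb(mbps: float) -> float:
    """Convert a sustained bitrate (megabits/second) into gigabytes per day."""
    return mbps * 1e6 / 8 * 86_400 / 1e9  # bits -> bytes, x seconds/day, -> GB

# ~19 Mbps steady, as seen at the 28-camera site:
per_day = daily_storage_gb(19)     # ~205 GB/day
month = per_day * 30 / 1000        # ~6.2 TB for 30 days of retention
print(f"{per_day:.0f} GB/day, {month:.1f} TB per 30 days")
```

Numbers like that make it clear why these boxes fill 16 bays of 2TB drives.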

"Nice thing with Areca is that once a hotspare is used, after you pull the bad drive, the new one you put in is automatically made the hotspare. Now THAT is handy!"

Many RAIDs do that. I can confirm that Arena Maxtronic, Infortrend, Huawei, Nexsan and many others have that capability built in.

 

I would never recommend RAID5 for critical, or even most, security applications. It's just too common to lose more than one drive at a time. The thing is, constant video recording doesn't leave the drives any idle time to run their internal self-tests. Couple that with many systems' inability to verify after write, and you're pretty much guaranteed to lose data at some point with RAID5.

 

I've encountered a number of instances where drives had problems reading back the data they thought they had written. In many applications, a normal verify pass would catch that and trigger the RAID to kick out the drive. Without verify, the RAID chugs merrily along thinking all is well until a second, recognized drive failure occurs. During the rebuild, the RAID tries to read parity data from the first (silently bad) drive and voilà! Two failed drives = data loss.

 

RAID6 avoids that by allowing two drives to fail without data loss. Can RAID6 systems lose data? Of course! But the likelihood of three "simultaneous" drive failures is far lower than that of two.
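The failure mode described above is easy to see with plain XOR, which is all RAID5 parity is (a toy sketch over byte strings, not a real implementation; RAID6's second parity block uses Reed–Solomon coding, which isn't shown here):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together -- RAID5's parity operation."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(stripe)            # block stored on the parity drive

# Lose any ONE block: XOR the survivors with parity and it comes back.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]

# Lose TWO blocks and XOR only gives you their combination -- one
# equation, two unknowns. That's exactly the read-error-during-rebuild
# scenario above; RAID6 survives it by storing a second, independent
# parity block per stripe.
```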


We have been using RAID 10 on QNAP devices for about 15 months now with GREAT success. It is a good way to squeeze a bit more out of a low-horsepower NAS if you need to, plus it saves a lot of write overhead when you take away the parity calculations.


I am going to try RAID 6 on our next install. After reading this chart, I agree that it seems like the best option. The current RAID 10 setups are being used for recording and reviewing footage from a 65-camera Mobotix setup. We are running 16 disks in each one (2TB WD EURS disks), so there is a fair bit of wiggle room for failures.

 

They were originally running RAID 5, which was killing the system. I could never review video, and the CPU load on the NAS was over 90% all the time. When we moved to RAID 10, it was like a magic wand had been waved over the system... it was beautiful!

 

EDIT: I should probably explain this a bit more. There are two QNAP TS-459 units and two TS-859 units in this setup. Each one was added at a different time, and the folks that did it had no real plan. When I took it over, we aggregated everything to make them all essentially one big unit. Because the CPUs on these were not exactly up to the task, we chose RAID 10 over RAID 6. If I had a new project with things spec'd correctly, RAID 6 would surely be my first choice.
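For boxes like these, the trade-off between the levels mentioned in this thread comes down to simple drive counts (a sketch ignoring hotspares, filesystem overhead, and binary-vs-decimal units):

```python
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Rough usable capacity for the RAID levels discussed in this thread."""
    if level == "raid5":
        return (drives - 1) * size_tb   # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb   # two drives' worth of parity
    if level == "raid10":
        return drives / 2 * size_tb     # everything is mirrored
    raise ValueError(f"unknown level: {level}")

# A 16-bay unit full of 2TB drives:
for level in ("raid5", "raid6", "raid10"):
    print(f"{level}: {usable_tb(level, 16, 2.0):.0f} TB usable")
# RAID10 gives up half the raw space, but needs no parity math at all --
# which is why it went so much easier on the underpowered QNAP CPUs.
```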


If they had trouble handling RAID5, they would've barfed on RAID6. I typically like to allow at least 50% data-rate headroom on a system; that helps ensure it can operate normally during rebuilds.
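That 50% rule of thumb applies directly to the bitrates quoted earlier in the thread (a sketch; the 19 Mbps steady-state number is taken from the site figures above):

```python
def provisioned_mbps(steady_mbps: float, headroom: float = 0.5) -> float:
    """Throughput to provision for, given a fractional headroom allowance."""
    return steady_mbps * (1 + headroom)

# A site streaming ~19 Mbps steady-state should be provisioned for:
print(provisioned_mbps(19))  # 28.5 Mbps, leaving room for rebuild I/O
```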


I am currently looking into the various RAID options, as I have decided to build a NAS.

 

Currently, I'm thinking of going with RAID 10.

