armoreltech

Near-Line SAS vs. SATA Drives For Video Mgmt Server

Recommended Posts

Working on specs for a new Dell PowerEdge R720xd to run ExacqVision Pro, and I have a question about hard drives. Dell offers two types of 7,200rpm 3TB drives: a 3Gbps SATA drive and a 6Gbps Near-Line SAS drive. I've always gone with SAS for servers in general, but the vendor I'm purchasing cameras, software, and maybe the server from is recommending the SATA drives as better for video, with no specific reason given as to why. I'm looking at 11 of these drives in a RAID 5 array (with a 12th as a hot spare) connected to a PERC H710P RAID controller. All drives will be internal to the server, and video is all the server will be doing. It will also have two SSDs in a RAID 1 to house the OS, as recommended by Exacq. I'll be installing 76 cameras: 45 ACTi E62s (3MP) indoors, plus 23 ACTi E33s (5MP) and 8 ACTi E86s (3MP) outdoors. All will use H.264 compression, and I plan to record continuously for about 9 hours on weekdays (motion-based at night and on weekends). So, thoughts on which drives to go with? Thanks in advance.
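For anyone sizing a similar system, here's a rough back-of-envelope for the aggregate write load. The per-camera H.264 bitrates below are assumed ballpark figures, not ACTi specs; plug in real numbers from the manufacturer's bitrate calculator at your chosen quality settings.

```python
# Rough write-load / storage estimate for the proposed camera mix.
# Per-camera bitrates (Mbit/s) are ASSUMED figures for illustration.
CAMERAS = {
    "E62 3MP indoor":  (45, 4.0),   # (count, assumed Mbit/s each)
    "E33 5MP outdoor": (23, 6.0),
    "E86 3MP outdoor": (8,  4.0),
}

# Aggregate bitrate if every camera records simultaneously.
total_mbps = sum(count * rate for count, rate in CAMERAS.values())
print(f"Aggregate write load: {total_mbps:.0f} Mbit/s (~{total_mbps / 8:.0f} MB/s)")

# Storage consumed by ~9 hours of continuous weekday recording.
hours = 9
gb_per_day = total_mbps / 8 * 3600 * hours / 1000
print(f"~{gb_per_day:.0f} GB per weekday of continuous recording")
```

Even with generous assumptions, the sustained write load is well under what a single modern 7,200rpm drive can stream sequentially, which is why either drive type tends to work at these rates.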


In my testing, SAS is faster in general, but at the rates you'll likely be recording, SATA is fine. I'm not that familiar with Exacq, but what I do in Milestone is specify which drive each camera writes to. For example, with 20 cameras and 4 drives, I'd have five cameras write to each drive and make sure that related cameras that may trigger simultaneously write to different drives. Also, check out WD Purple SATA drives, as they are designed for NVR use. And don't put too many drives on one controller, or you'll saturate it quickly.
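The camera-to-drive spreading described above is essentially a round-robin assignment. A minimal sketch (camera and drive names here are made up for illustration, not Milestone configuration syntax):

```python
# Spread cameras across drives round-robin, so cameras that are
# adjacent (and likely to trigger together) land on different spindles.
cameras = [f"cam{i:02d}" for i in range(20)]
drives = ["D:", "E:", "F:", "G:"]

assignment = {cam: drives[i % len(drives)] for i, cam in enumerate(cameras)}

# Neighbouring cameras write to different drives:
print(assignment["cam00"], assignment["cam01"])  # D: E:
```

The payoff is that a simultaneous trigger on neighbouring cameras generates writes on separate disks instead of queueing on one.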

 

I would not use RAID 5 because it really slows down writes. I've been doing this for over 30 years and have spent lots of time at EMC's offices working on which configuration works best for different scenarios. First, consider that this workload is basically write-only most of the time. The disks will get badly fragmented, RAID has to manage that across the LUN, and writes won't be pretty as the heads bounce around looking for free fragments. You've probably set up servers for workloads with many concurrent reads and few writes, like databases, but this is the opposite.
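The classic RAID 5 write penalty is easy to quantify: each small random write costs four disk I/Os (read old data, read old parity, write new data, write new parity). A sketch, using an assumed ballpark of 75 random IOPS per 7,200rpm disk:

```python
# RAID 5 small-write penalty: each random write = 4 disk I/Os
# (read data, read parity, write data, write parity).
disks = 11
iops_per_disk = 75          # assumed ballpark for a 7,200rpm drive

raw_iops = disks * iops_per_disk
raid5_write_iops = raw_iops / 4   # read-modify-write penalty
jbod_write_iops = raw_iops        # no parity overhead

print(raid5_write_iops, jbod_write_iops)
```

Worth noting as a caveat: NVR recording is mostly large sequential writes, and a controller that can do full-stripe writes largely avoids the read-modify-write penalty, so the real-world gap is workload-dependent.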

I would not use RAID 5 because it really slows down writes.

 

That's strange because Avigilon sets all their servers up in RAID 5

 

And I have not had any problems with the RAID 5 setups for video.


Just because they do it doesn't make it right. The OP is talking about an 11-drive RAID 5; that's just crazy, and I bet it would be on the order of 5-10x slower than JBOD. If Avigilon is doing it, it's likely at worst a 4-drive RAID 5 configuration, still slower than JBOD but probably acceptable for NVR use.

We use Milestone Express, which provides redundancy on its own.

 

So you're using a software solution instead of a hardware solution.

 

I would much rather pop out a drive and replace it with another without losing anything or even stopping services.

 

I would hate to have to say:

"Mr customer that armed robbery isn't recorded because we lost a hard drive."

 

 

Just because they do it doesn't make it right. The OP is talking about an 11-drive RAID 5; that's just crazy, and I bet it would be on the order of 5-10x slower than JBOD.

 

Who are you to say it isn't right?


I know this is an old post, but what drives did you end up selecting? I currently have 9 of the Dell R720xd's in place: 24 × 900GB 2.5" SAS drives plus two SSDs for the OS, with the 24 drives configured in a RAID 10. On my test machine I have 120 1080p cameras running at 15 IPS and I'm not even stressing the unit yet. For larger systems at this price point, it is difficult to find a better value.
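For comparison with the RAID 5 debate above, here's what that 24-drive chassis yields in usable capacity under different RAID levels (drive count and size taken from the post; the arithmetic is just the standard overhead per level):

```python
# Usable capacity of 24 x 900 GB drives under common RAID levels.
drives, size_gb = 24, 900

raid10_gb = drives // 2 * size_gb   # half the drives mirror the other half
raid5_gb = (drives - 1) * size_gb   # one drive's worth of parity
raid6_gb = (drives - 2) * size_gb   # two drives' worth of parity

print(raid10_gb, raid5_gb, raid6_gb)
```

RAID 10 gives up nearly half the raw capacity compared with RAID 5, but in exchange there's no parity write penalty and rebuilds only copy one mirror rather than reading every drive in the array.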


I have not found that using a slow HDD has any effect on the recordings. Heck, I'm running a dozen cameras on a slow 5,400rpm 2TB 2.5" drive with no problems, and that drive also holds the operating system.


Nice setup. We looked at a similar option for a server/storage solution, but it was not quite what we needed. Currently we use the 9.8TB of formatted storage on the server for live video only and archive to an EMC Isilon unit; we have about half a petabyte of video storage. I will need to add a tray, possibly two, if we keep adding on at our current rate.


My small contribution here:

I worked for many years for CCTV manufacturers, and I witnessed the migrations from AIT tapes to PATA, then to SAS HDDs, and now to SATA HDDs.

That choice comes down simply to the performance of today's SATA drives, their cost, and the cheaper controllers needed to manage them.

These choices are the result of long test measurements on the servers available (Dell recently changed its strategy too, going to SATA HDDs even for high-capacity mass storage).

Regarding RAID functionality: RAID 5 is the minimum protection and a bit of a has-been; manufacturers are offering RAID 6 more and more, thanks to ever-lower HDD pricing.

Just check the latest Dell servers: you can reach 700 recording streams in one unit with a total of 600 TB, very simply, for a centralized solution. And of course you can have a mirroring unit, failover, or one spare server for a group of 15 NVRs online. Today's technology permits that very simply.

But in the end, cost matters too.

Today it's normal that some will still push SAS, since big stocks still exist, but for how long?

CCTV IP manufacturers are always proactive and work in close, confidential relationships with server IT manufacturers, so they are already working on future hardware and R&D evaluation for the coming years. As throughput grows exponentially year over year, it wouldn't surprise me to see more than 1,000 video streams in a single server, even at 100% stress, especially when rebuilding a faulty array.

So SAS will disappear in the coming years; remember what happened to plain SCSI and PATA.

Today, 8TB SATA HDDs are already available to the public, and in R&D labs drives over 10TB already exist, under testing/evaluation/validation.

But these are just the facts...

(sorry for my poor written English)


I have been building RAID array storage servers for 10 years using open-source software. When you put 11 drives in a RAID 5 and use good-quality 7,200rpm SATA drives like WD Black, the performance difference between SAS and SATA is minimal. The more drives you have in your array, the better. Besides the number of drives, the controller and software you use matter. For a controller, I recommend either an Areca or LSI card with plenty of cache RAM and a battery backup; do not use a motherboard-integrated RAID solution. For software, if you are building a SAN, use Openfiler. I've used it for VM machines without issues.

DO NOTS

- do not use a 5,400rpm drive; it will be very slow. I have a personal array of eight WD Green drives, and it is painfully slow.

- do not forget to turn on email notifications in the RAID controller. You want to know when a drive goes bad.

- do not use green drives in a SATA array.

- don't forget to get a strong PSU if doing your own build.


 

"The more drives you have in your array the better"

 

Not always better: the more drives in the array, the longer the rebuild time.


"The more drives you have in your array the better"

 

Not always better. The more drives in the array the longer the rebuild times

 

Did you consider the time it takes to rebuild a 4TB drive in a RAID 5?

 

It depends on the number of drives, the size of the drives, and the type of array. I've had one drive go out on a 15-drive RAID 5 of 2TB drives; with a good controller it doesn't take long, a day at most. That's less time than rebuilding a larger drive in a smaller set.
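A back-of-envelope for rebuild time: the controller must write the full capacity of the replacement drive, so time scales with drive size divided by sustained rebuild rate. The 100 MB/s rate below is an assumed figure; real rates vary a lot with controller, array width, and foreground load.

```python
# Minimum rebuild time = drive capacity / sustained rebuild rate.
# 100 MB/s is an ASSUMED rate; production rebuilds under load are slower.
def rebuild_hours(drive_tb, mb_per_s=100):
    return drive_tb * 1_000_000 / mb_per_s / 3600

print(round(rebuild_hours(2), 1))  # 2 TB drive
print(round(rebuild_hours(4), 1))  # 4 TB drive
```

So doubling the drive size roughly doubles the rebuild window, which is exactly the exposure that pushes people from RAID 5 toward RAID 6 or RAID 10 as drives get bigger.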

