
Geovision GPU Decoding


Anyone know how to turn this on?

 

I can't find the settings for it.

 

Does the setting just not show up if your hardware isn't a Sandy Bridge chipset with onboard VGA?

 

Running 8.5.4.0 on Win7 64-bit with only 4 GB of memory (the specs say 8 GB recommended, but not required).

 

The specs also mention a Sandy Bridge chipset with built-in VGA, but I'm kinda fuzzy on whether that is recommended or required. (Do you really need a Sandy Bridge for this? Will they be adding older chipsets for this feature?)

 

Pavel, you got GPU Decoding working, didn't you? Did you have to check a setting?

 

It would really suck if it were limited to Sandy Bridge only.

 

 

 

GPU Decoding Specifications

In V8.5 or later, support for GPU (Graphics Processing Unit) decoding is added to lower the CPU loading and to increase the total frame rate supported by a GV-System. GPU decoding only supports the following software and hardware specifications:

 

Software Specifications:

 

Supported Operating Systems:

Windows Vista (32-bit) / 7 (32 / 64-bit) / Server 2008 R2 (64-bit)

 

Resolution: 1 M / 2 M

Codec: H.264

Stream: Single

Note: To apply GPU decoding, the recommended memory (RAM) is 8 GB or more for a 64-bit OS and 3 GB for a 32-bit OS.

 

 

Hardware Specifications:

 

Motherboard: Sandy Bridge chipset with onboard VGA

Ex: Intel® Q67, H67, H61, Q65, B65, Z68 Express Chipset.

Note: If you want to use an external VGA card, it is required to connect a monitor to the onboard VGA to activate GPU decoding.


I don't think they will add support for older chipsets, since the new CPU-integrated graphics processors are so much faster. The new GPUs also support OpenCL, which I think is what is allowing Geo to tap into the GPU.
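For anyone wondering whether their card even exposes OpenCL to software, the sketch below lists the OpenCL platforms and devices the system reports. This is only a diagnostic and assumes Python with the pyopencl package installed; it has nothing to do with how Geo itself talks to the GPU.

```python
# List OpenCL platforms and devices visible on this machine.
# Assumes: pip install pyopencl, plus GPU drivers that ship an OpenCL runtime.
# If this errors out immediately, no OpenCL driver is installed at all.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor})")
    for dev in platform.get_devices():
        kind = cl.device_type.to_string(dev.type)
        print(f"  Device: {dev.name} [{kind}]")
```

If a card like the HD 3800 doesn't show up in a list like this, that would line up with it not supporting OpenCL.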

 

The bit in the user guide about connecting a monitor to the onboard VGA when using an external card is a bit confusing, but the reason is this: if you install an external VGA card and do not connect a monitor to the onboard VGA, the board shuts the onboard graphics off, making it unavailable to the Geo software. (You can use a "dummy monitor" instead: connect 75-ohm resistors between onboard VGA pins 1-6, 2-7, and 3-8.)

 

There is no setting to enable (unless I'm totally forgetting...).

Tried a Celeron G530 in an H61 system with five 1.3 MP and three 2 MP cams. In 8-channel live view on a 1920x1080 monitor, with the cams set to single-stream H.264 at max resolution, Geo was using maybe 20% CPU.

The bit in the user guide about connecting a monitor to the onboard VGA when using an external card is a bit confusing...

 

Glad I'm not the only one who was confused by that.

 

My current GV-NVR server is an older Intel Q6600 quad-core system with an AMD HD 3800 series video card, running Win7.

 

I thought decent GPUs alone could handle CUDA/OpenCL without a newer chipset, but I really don't recall. I just read that the HD 3800 doesn't support OpenCL. I don't expect Geo to support the older chipsets/GPUs, but I was curious whether they would or not. Guess it's time to upgrade that old server.

 

I'm still unclear, though: does the chipset really matter, or is it just the GPU?

 

If only the GPU mattered, you'd think they would maybe support a few older chipsets, in which case I could just throw one of my Nvidia GTX 570s into the Q6600 system and be ready to use GPU decoding. But they make it sound like you need the right combo of chipset and GPU, not just a certain GPU. (Memory helps too; 16 GB recommended, I think, though I'm not sure what the minimum is.)

 

My current chipset is the Intel X38. The motherboard was made in the first part of 2008, so it's a good 4 years old. So, you think the chances of Geo supporting the X38 are pretty slim? What would the odds be of them supporting it?

 

I'm also curious whether anyone on this forum, with a properly spec'd server (Sandy/Ivy Bridge, decent GPU, lots of memory), has done a side-by-side comparison of GV-NVR with GPU decoding off and on. What kind of CPU savings were experienced?

 

My current GV-NVR runs at about 40-50% CPU on all 4 cores with Live view of 3 cams on the 4x4 panel display. If I make the panels smaller, the CPU goes down by half.

 

It goes up substantially more than 50% when a client connects.

 

I haven't turned on OnDemand yet either. That should help a bit. One thing is certain: these megapixel IP cameras in live view can eat your CPU for breakfast, and GPU decoding is going to be a necessity.

 

To quote Arnold:

[embedded YouTube video: 8Tkkpfhd07s]


I'm still unclear, though: does the chipset really matter, or is it just the GPU?

 

It's the chipset/CPU. It has to be an Intel HD Graphics GPU/IGP (Integrated Graphics Processor, inside an LGA1155 CPU).

 

The LGA1155 motherboard must have an onboard video output. (Some boards do not have video ports; these won't work because the IGP will be disabled.)

 

Also be aware that not all Sandy Bridge CPUs have an integrated GPU. The i5-2550K, for example, does not, so it would not be usable for the GPU decoding feature.

 

I'm also curious whether anyone on this forum, with a properly spec'd server (Sandy/Ivy Bridge, decent GPU, lots of memory), has done a side-by-side comparison of GV-NVR with GPU decoding off and on. What kind of CPU savings were experienced?

 

I would be interested to see a side-by-side as well.

I will be putting an analog DVR together in the next week or so; if I get some time before installing the GV800B card, I will set it up as an NVR and do some testing. I will measure CPU first while using the Sandy Bridge IGP (Celeron G530), then swap in an HD 6450 and disable the Intel IGP.
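For the CPU measurements, something like the snippet below is an easy way to get a 10-minute average instead of eyeballing Task Manager. This is just a sketch and assumes Python with the psutil package installed on the test box; it is not part of any Geo tooling.

```python
# Sample total CPU usage once a second for 10 minutes and report the average.
# Assumes: pip install psutil. Run it while the Geo software is in live view.
import psutil

samples = []
for _ in range(600):                         # 600 x 1 s = 10 minutes
    samples.append(psutil.cpu_percent(interval=1))

print(f"10 min. average CPU: {sum(samples) / len(samples):.2f}%")
print(f"min/max: {min(samples):.1f}% / {max(samples):.1f}%")
```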


That would be great. Can't wait to see the results.

 

I read the part about the integrated GPU requirement as well, and that boggles me. Why limit the feature to an onboard integrated GPU when something like the latest Nvidia/AMD card would do so much better?


I agree; the external cards are definitely much more powerful.

 

Sandy Bridge must have been the easiest starting point, but I think this feature is going to become a requirement as resolutions climb higher and higher, so I would imagine they will add support for other GPU platforms in the future. We'll just have to wait and see what the future brings.


Had some time to compare with and without GPU decoding. I only had 4 cams, but it still shows a noticeable difference.

 

GPU decoding (Sandy Bridge Celeron IGP): 5-15% CPU (10 min. avg: 8.79%)

 

PCI-E Radeon HD 4350 1 GB, IGP disabled: 25-35% CPU (10 min. avg: 28.48%)

 

CPU usage was measured in full-screen mode with the cams set to single-stream, max resolution, H.264 codec. Geovision panel size/monitor resolution was 1600x1200, with the DirectDraw Overlay and De-interlace Render options selected.
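Rough arithmetic on the two 10-minute averages above (nothing new, just restating the posted numbers as a relative saving):

```python
# Relative CPU saving implied by the two 10-minute averages posted above.
igp_avg = 8.79       # % CPU with GPU decoding on the Sandy Bridge IGP
radeon_avg = 28.48   # % CPU with the HD 4350 and the IGP disabled

saving = (radeon_avg - igp_avg) / radeon_avg * 100
print(f"GPU decoding cut average CPU load by about {saving:.0f}% "
      f"(roughly {radeon_avg / igp_avg:.1f}x lower)")
# -> about 69% lower, roughly 3.2x
```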

 

System Specs:

GV-NVR v8.5.4

Celeron G530

2x 4GB DDR3-1066 RAM

P8H61-M LE/CSM motherboard

Win7 Pro x64 (Aero and UAC disabled)

 

Cams:

BX140DW - 1MP

VD122D - 1.3MP

VD120D - 1.3MP

MFD110 - 1.3MP

Thanks for posting this. I have been looking for actual numbers and I'm glad to see someone took the time to post some!


How do you enable or disable this feature in GV-NVR?

 

Does the feature only show up if your machine meets all the requirements of GPU decoding?

 

Let's say you had a system with 2 video cards, can you tell Geo to use the GPU decoding on one of the cards but not the other?

 

From what I've been reading, it seems that GPU decoding can only be used with the built-in motherboard GPUs; is that still true?


I read the part about the integrated GPU requirement as well, and that boggles me. Why limit the feature to an onboard integrated GPU when something like the latest Nvidia/AMD card would do so much better?

 

If my guess is correct, this is due to a limitation that comes from Intel's Quick Sync.

Given this behavior, I would guess the GPU acceleration mainly comes from Intel's Quick Sync (maybe with a little help from OpenCL too).

The reason you see this limit is that Intel only activates the Quick Sync hardware when the integrated graphics is connected to a monitor output; when the integrated graphics has no monitor output connected, the Quick Sync hardware will not be activated. This behavior was seen when Intel first introduced Quick Sync 3 years ago.

 

I heard that there are workarounds that will activate Quick Sync with discrete graphics cards. You may have to Google for it.
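If you want a quick sanity check of whether Quick Sync H.264 decoding is actually reachable on a given box, independent of Geo, one option is to push a short clip through ffmpeg's QSV decoder. A rough sketch, assuming an ffmpeg build with Quick Sync (QSV) support is on the PATH; sample.mp4 is just a placeholder name for any H.264 clip you have lying around:

```python
# Try decoding an H.264 clip with Intel Quick Sync via ffmpeg's h264_qsv decoder.
# If this fails while plain software decode works, QSV is likely not active
# (e.g. IGP disabled, or no monitor/dummy plug on the onboard output).
import subprocess

cmd = [
    "ffmpeg", "-v", "error",
    "-hwaccel", "qsv",
    "-c:v", "h264_qsv",
    "-i", "sample.mp4",     # any H.264 test clip (hypothetical file name)
    "-f", "null", "-",      # decode only, discard the output
]
result = subprocess.run(cmd, capture_output=True, text=True)
print("QSV decode OK" if result.returncode == 0
      else f"QSV decode failed:\n{result.stderr}")
```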


How do you enable or disable this feature in GV-NVR?

I would also like to see an icon indicating whether the acceleration is enabled or not. Unfortunately, that isn't available in the UI.

So far, if my guess is correct, the GPU acceleration only works with Intel's integrated graphics with Quick Sync.

You can check whether your Intel CPU has Quick Sync here: http://ark.intel.com/search/advanced?QuickSyncVideo=true&MarketSegment=DT


Thanks stealth_lee.

 

Wow, that is quite a limitation. I mean, if I have a nice Nvidia GPU hanging around, it sounds like I can't use it with Geovision. That's a bummer. I wonder why they don't allow GPU decoding to work with Nvidia or AMD GPUs?


Well, I guess that's how they chose to implement the GPU decoding function at first.

I heard through the grapevine that they will be supporting nVidia GPUs in the future.

 

My guess would be that their Nvidia GPU decode performance won't match Intel's Quick Sync, since Intel put a lot of effort into Quick Sync and Nvidia GPUs have hardware limitations when doing this kind of computation (especially context switching...).

I heard through the grapevine that they will be supporting nVidia GPUs in the future.

 

Your grapevine was half-right.

 

Looks like they added GPU decoding with Nvidia cards for GV-VMS, but not for GV-NVR.

 

I hope they add it to GV-NVR, because GV-VMS is an expensive upgrade from GV-NVR for third-party IP cameras. It's cost-prohibitive.

 

GV-VMS Version History

Version 15.10.1.0

Release date: 02/22/2016

 

New in Main System:

 

Support for Windows 10

Support for GV-ASManager / GV-LPR Plugin V4.3.5.0

Support for GV-Web Report V2.2.6.0

Support for GPU decoding with external NVIDIA graphics cards

Support for dual streaming of GV-Fisheye Camera

Support for H.265 codec

Support for sending alert notifications through SNMP protocol

Support for up to 20,000 client accounts in Authentication Server

New image orientation options for corridor scenes (supported GV-IP Cameras only)

Support for marking video bookmarks in live view

Connection log with Center V2 / Vital Sign Monitor / Failover Plugin registered in System Log

Support for compacting videos to key frame only when merging and exporting

You need to be running more than 40 5 MP cameras for Nvidia decoding to be worthwhile.

 

In my experience this depends quite a bit on the software you use. On my particular system, where I constantly monitor the Nvidia GPU and Intel GPU loads with GPU meters, I can easily see a gain when the Nvidia card does more of the work. With Milestone XProtect and only 6 cams at 30 fps, all 1080p or lower streams, the Intel GPU averages 50% load while display rendering averages about 10% load on the current Nvidia GPU (rendering only), and that's only on a 1080p TV set. If I were to double the number of cams, my Intel GPU (HD 530 on an Intel i7-6700K) would be close to 100%. It's not good to always run processors at maximum load. And if I were to upgrade any of these to 4K cameras at 3840x2160, that could really take another bite out of this setup.
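For what it's worth, a script like the one below is an alternative to a GPU-meter widget for watching the Nvidia side during tests. Just a sketch, assuming the Nvidia driver's nvidia-smi tool is on the PATH; the Intel HD 530 load still has to come from Task Manager or a separate meter.

```python
# Log Nvidia GPU and video-decoder utilization once a second using nvidia-smi.
# Assumes the Nvidia driver is installed so nvidia-smi is available on the PATH.
import subprocess
import time

QUERY = "utilization.gpu,utilization.decoder,memory.used"

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Only the first GPU is reported here; extra lines mean extra GPUs.
    gpu, dec, mem = [v.strip() for v in out.splitlines()[0].split(",")]
    print(f"GPU {gpu}% | video decoder {dec}% | {mem} MiB used")
    time.sleep(1)
```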


My comment was in relation to Geovision software only, based on scalability information direct from Geovision.

 

This video is where I got my info from, and it explains everything. A bit further on from where I linked it, it also compares Geovision with Milestone GPU decoding.

 


Nice video. It mentions that for maximum performance you'll want to use both Intel and Nvidia GPUs. I noticed some of the settings they use are VBR instead of CBR. It still seems overly optimistic compared to what you get in the real world, however. You can always use more GPU power for these streams. A single 4K stream can sometimes bring a system to its knees if it's not new and powerful enough. Then when you add multiple monitors, and especially multiple 4K monitors viewing multiple 4K streams, you really start to see things bog down. Currently I have to use both Intel and Nvidia GPUs. Before I switched from GV-VMS 15 to XProtect 2016 R2, I was seeing some high CPU and GPU usage with GV-VMS. I did not notice the large differences in GPU decoding that they say can be expected.

 

I was getting pretty much the same consistent CPU and GPU utilization from both VMSes. But with Geo, it was hard to determine the actual frame rates because, as far as I know, they can only be displayed on the web client. With XProtect there is a real-time option to display the fps of each stream on the main system.

 

Good to see Geovision is still trying to compete. They actually beat XProtect to Nvidia GPU decoding, which was nice. I'm still waiting for XProtect to add it to their product, but in all other aspects XProtect is miles ahead of Geo, unfortunately. I used Geovision for 15 years, but due to the recent debacle in the way they handled upgrades of GV-NVR to GV-VMS, I finally decided to move away from Geo and over to Milestone. There's much more flexibility with the XProtect products.


I only use GV-VMS because I only have Geovision cameras. I like not having to pay for software updates.

 

If I had to pay for the software I would probably go with something else. However, free support and free software updates are really nice, and not something many people factor in when speccing a system. I started my box on Windows 7 and I'm now running the Windows 10 Anniversary Update. Through each Windows release (7 -> 8 -> 8.1 -> 10 -> 1511 -> 1607) I've upgraded from GV-NVR to GV-VMS for free.

 

75% of my cameras are fisheye too. Bit of a pain to license them with other products to get the same functionality, although maybe that is not the case now.

Yeah, GV-VMS is free for Geo cameras. If you only have Geo cams, it's the best solution hands down.

 

I only use non-Geo cameras, and for those you have to purchase the GV-VMS third-party camera licenses. That is where the problem lies, because of the way Geo handles the upgrades from GV-NVR to GV-VMS for third-party cameras.

 

I used to own Geo hardware (a GV-650 capture card), so I started using Geo software. Then I migrated to GV-NVR for IP cameras.

