My comment was in relation to Geovision software only, based on scalability information directly from Geovision.
This video is where I got my info from, and it explains everything. A bit further on from where I linked, it also compares Geovision with Milestone GPU decoding: https://youtu.be/d74yR-e0vFc?t=1523
Nice video. It mentions that for maximum performance you'll want to use both Intel and Nvidia GPUs. I noticed in some of the settings they are using VBR instead of CBR. Still seems overly optimistic for what you get in the real world, however. You can always use more GPU power for these streams. A single 4K stream can sometimes bring a system to its knees if it's not new and powerful enough. Then when you add multiple monitors, and especially multiple 4K monitors viewing multiple 4K streams, you really start to see things bogging down. Currently I have to use both Intel and Nvidia GPUs. Before I switched from GV-VMS 15 to Xprotect 2016 R2, I was seeing some high CPU and GPU usage with GV-VMS. I did not notice the large differences in GPU decoding that they say can be expected.
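If anyone wants to check for themselves whether the Nvidia card is actually doing the decode work, `nvidia-smi -q -d UTILIZATION` reports a Decoder percentage alongside the overall GPU number. Here's a minimal sketch of pulling that figure out of the report; the sample text below is illustrative only, and the exact layout can vary between driver versions:

```python
import re

# Hypothetical sample of `nvidia-smi -q -d UTILIZATION` output; real values
# depend on your card and the current decode load.
SAMPLE = """\
    Utilization
        Gpu                               : 12 %
        Memory                            : 5 %
        Encoder                           : 0 %
        Decoder                           : 34 %
"""

def decoder_utilization(report: str):
    """Extract the Decoder utilization percentage from an nvidia-smi report."""
    match = re.search(r"Decoder\s*:\s*(\d+)\s*%", report)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    # In practice you would feed it live output, e.g.:
    #   import subprocess
    #   report = subprocess.check_output(
    #       ["nvidia-smi", "-q", "-d", "UTILIZATION"], text=True)
    print(decoder_utilization(SAMPLE))  # prints 34 for the sample above
```

If the Decoder number stays near zero while CPU climbs as you open streams, the VMS is likely falling back to software decode.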
I was getting pretty much the same consistent CPU and GPU utilization from both VMSes. But with Geo, it was hard to determine the actual framerates because, as far as I know, they can only be displayed on the web client. With Xprotect there is a realtime option to display the fps of each stream on the main system.
Good to see Geovision is still trying to compete. They actually beat Xprotect with Nvidia GPU decoding, which was nice. I'm still waiting for Xprotect to add it to their product, but in all other aspects, Xprotect is miles ahead of Geo, unfortunately. I used Geovision for 15 years, but due to the recent debacle in the way they handled upgrades from GV-NVR to GV-VMS, I finally decided to move away from Geo and over to Milestone. There's much more flexibility with the Xprotect products.