Unigine Superposition Is A Beautiful Way To Stress Your GPU In 2017, 17-Way Graphics Card Comparison
-
Dear Michael,
There's something I don't understand:
your results suggest that Ultra < Extreme (i.e. Extreme is the most demanding preset),
but when I compare Ultra vs Extreme on my machine (Intel Skylake) it appears Ultra is much more demanding than Extreme (by about 4x).
My observations seem to be confirmed by the "pts/unigine-super-1.0.0" test profile and by the generated "~/.Superposition/automation/log-*.txt" files:

- Ultra:

"pts/unigine-super-1.0.0":
Code:
<Entry>
    <Name>Ultra</Name>
    <Value>-shaders_quality 3 -textures_quality 2</Value>
    <Message></Message>
</Entry>
Code:
Settings: Render: OpenGL   Fullscreen: normal
          App resolution: 1920x1080   Render resolution: 1920x1080
          Shaders: Extreme   Textures: high
          SSRT: enabled   SSAO: enabled   SSGI: enabled   Parallax: enabled
          Refraction: enabled   Motion blur: enabled   DOF: enabled
- Extreme:

"pts/unigine-super-1.0.0":
Code:
<Entry>
    <Name>Extreme</Name>
    <Value>-shaders_quality 4 -textures_quality 2</Value>
    <Message></Message>
</Entry>
Code:
Settings: Render: OpenGL   Fullscreen: normal
          App resolution: 1920x1080   Render resolution: 1920x1080
          Shaders: 4K Optimized   Textures: high
          SSRT: enabled   SSAO: enabled   SSGI: disabled   Parallax: enabled
          Refraction: disabled   Motion blur: enabled   DOF: enabled
From what I've understood, the "4K Optimized" shaders are a lot lighter than the "Extreme" shaders - and on top of that the Extreme preset disables SSGI and Refraction, both of which the Ultra preset enables.
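In case anyone wants to double-check on their own machine, here is a rough helper I'd use (just a sketch, nothing official from PTS or Unigine; point it at whichever log-*.txt your run produced):

Code:
/* Sketch only: print the lines of a Superposition automation log that
 * mention the settings which differ between the two presets above,
 * so Ultra and Extreme can be compared side by side. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <automation log file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror(argv[1]);
        return 1;
    }
    char line[512];
    while (fgets(line, sizeof line, f)) {
        if (strstr(line, "Shaders:") || strstr(line, "Textures:") ||
            strstr(line, "SSGI:") || strstr(line, "Refraction:"))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}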
I'm using Unigine Superposition 1.0.
Any thoughts?
-
Originally posted by L_A_G:
Seeing how I'm able to get a constant 97-100% GPU utilization regardless of settings, I doubt that there's much overhead actually getting in the way of keeping the GPU completely occupied. Especially when the benchmark never seems to go much above 20% CPU utilization on an R7 1700 with SMT off. During the development of the more recent versions of OpenGL they put a lot of effort into CPU overhead reduction under the banner of "AZDO", or "Almost Zero Driver Overhead". Seeing how this actually requires OpenGL 4.5, I wouldn't be the least bit surprised if many of these features are used.
There is a reason why AMD got up to 70% more performance under (Windows) Vulkan for Doom. That wasn't just the CPU side.
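To put the "AZDO" reference in the quote in concrete terms: one of the techniques usually grouped under that banner is the persistently mapped buffer (core since OpenGL 4.4). A minimal sketch, not taken from Superposition's code and assuming a loader such as GLEW exposes the modern entry points:

Code:
#include <GL/glew.h>   /* any loader exposing OpenGL 4.4 works; GLEW is only an assumption */

/* Create a buffer that stays mapped for its whole lifetime, so per-frame
 * updates become plain writes through *cpu_ptr instead of map/unmap calls
 * that go through the driver every frame. */
GLuint create_persistent_vbo(GLsizeiptr size, void **cpu_ptr)
{
    const GLbitfield flags = GL_MAP_WRITE_BIT
                           | GL_MAP_PERSISTENT_BIT
                           | GL_MAP_COHERENT_BIT;
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);          /* immutable storage */
    *cpu_ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags); /* mapped once, kept mapped */
    return buf;
}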
-
Originally posted by Shevchen:
Vulkan is more than just solving the CPU bottleneck; it's also about making GPU rendering more efficient. So 100% GPU load on OpenGL is different from 100% GPU load on Vulkan. While solving certain bottlenecks around draw calls is important and was the main motivation for creating Vulkan in the first place, the rendering process itself also gets a boost if properly optimized. Nevertheless, it's nice to see OpenGL improvements - but it's only half of the story.
There is a reason why AMD got up to 70% more performance under (Windows) Vulkan for Doom. That wasn't just the CPU side.
-
Originally posted by L_A_G:
Applications, both the CPU and GPU parts, obviously get more control under Vulkan when they have to take over much of the work drivers used to take care of. However, we've seen this doesn't necessarily lead to an actual performance gain, as applications may not manage their new duties as well as the drivers that used to handle them. Because of this, performance right now depends more on the quality of the implementation than on which API is being used.
Originally posted by L_A_G:
AMD saw some pretty nice gains in Doom with Vulkan, but you should remember that their OpenGL drivers have a reputation for being pretty damn crummy. Nvidia, who are known for having considerably less crummy OpenGL drivers, saw a considerably smaller bump in performance. Try to remember that Vulkan was built on Mantle, AMD's own API that they made a rather decent implementation of, so AMD could re-use most of a good driver for their Vulkan driver.
But here's the thing: that's not AMD's fault. In fact, it's a good lesson for devs to learn how to write clean code. It's a pain in the ass, but a good one.
To take a more drastic example:

Code:
b = 1;
c = 2;
a = b + c;
sprintf(a);
Looks okay, right? In this case, AMD would ask you: "How the heck are a, b and c defined? Is it an int, a double, a char or a potato?" And AMD is right here. Nvidia goes more along the lines of "looks like a number, probably an int, what could possibly go wrong? *calculates*".
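For contrast, the version the strict compiler is happy with is trivial once everything is actually declared - a minimal, complete sketch:

Code:
#include <stdio.h>

int main(void)
{
    int b = 1;           /* explicit types: nothing left for the compiler/driver to guess */
    int c = 2;
    int a = b + c;
    printf("%d\n", a);   /* print with a format string instead of a bare sprintf(a) */
    return 0;
}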
Now to the consequence:
To write fast and clean code, devs have to dig through the entire thing again (which is only done for new titles or for projects with long-term funding) in order to get the best performance out of it. They may even have to rewrite the core engine and expand it to take advantage of the 300 new possible techniques Vulkan gives them. And all of this has to be done from scratch - maybe you get documentation, maybe a book, maybe some demos - but in the end, all the "old stuff" has to be replaced.
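Just to give a flavour of those new duties: under Vulkan even waiting for the GPU is the application's job. A minimal sketch (my own illustration, not code from any of the titles discussed) of submitting one pre-recorded command buffer and fencing on it:

Code:
#include <vulkan/vulkan.h>
#include <stdint.h>

/* Submit one command buffer and block until the GPU has finished it.
 * Under OpenGL the driver hid this kind of bookkeeping; under Vulkan
 * the application owns the fence explicitly. */
void submit_and_wait(VkDevice device, VkQueue queue, VkCommandBuffer cmd)
{
    VkFenceCreateInfo fence_info = { .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO };
    VkFence fence;
    vkCreateFence(device, &fence_info, NULL, &fence);

    VkSubmitInfo submit = {
        .sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .commandBufferCount = 1,
        .pCommandBuffers    = &cmd,
    };
    vkQueueSubmit(queue, 1, &submit, fence);

    /* The application, not the driver, decides when and how long to wait. */
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
    vkDestroyFence(device, fence, NULL);
}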
I also don't think Mantle is heavily favoring AMD here. It's just that Nvidia heavily optimized for DX and hit a wall with Vulkan, as they streamlined their architecture for DX.

Last edited by Shevchen; 26 April 2017, 05:10 AM.
-
Originally posted by Shevchen:
Correct - and in the case of this particular benchmark, I'd like to see an implementation "well done". Right now that is not the case, and it might be worth a couple of dev-hours in order to have a nice reference point. The problem with Vulkan right now is that we only have very few examples to refer to (like Doom and a couple of demos) - nothing much to validate against.
...
AMD's approach is the more altruistic one, while Nvidia's is the more pragmatic. That pragmatic-versus-altruistic split permeates pretty much everything Nvidia and AMD do. AMD went in heavy and early on low-level APIs while Nvidia focused on engineering around the limitations of the old high-level ones. AMD went for unified shaders early while Nvidia stayed with the more traditional model for longer. AMD went for a much higher level of parallelism while Nvidia focused more on per-thread performance.
-
Hey, I'm not trying to impress you here; I'm just trying to express my opinion on a rather specific difference in very few words.
Now, as for DX12 titles, there isn't a single one out there performing well on it, and on Vulkan we have exactly one. One might argue that on Windows "Rise of the Tomb Raider" with DX12 has excellent CrossFire support (some tech reviewers like Adored TV even recommend it if you're looking for a cheaper alternative to Nvidia for more FPS), but that is only the case because RotTR is pretty badly coded and two GPUs can compensate for that. Nvidia doesn't scale as well in this regard, and the bigger problem: this is all void on Linux.
I'm trying to evaluate whether Vega (once it comes out) is worth the money I might put into it, so I'm looking for benchmarks that give me an educated idea of how the GPU performs. Now, I have a very specific need for my next GPU (it must run well on Vulkan, because it will have to run Star Citizen in the future), and as I plan to upgrade my monitor too (the one I have now was a cheap solution bought when my older one died), the GPU should support things like HDR, which pretty much screams FreeSync 2.
Is there a benchmark out there where I can get this kind of educated guess, besides looking at Doom and hoping for the best? That's why I hoped Unigine Superposition would have a good Vulkan implementation running on Linux, so we would finally have a valid data point.