Unigine Superposition Is A Beautiful Way To Stress Your GPU In 2017, 17-Way Graphics Card Comparison


  • #41
    Originally posted by linuxjacques View Post
    Is there some law that says they can't post a hash of the files so I can at least verify I got a good download?
    sha1sum: a98e3a7372ddf7dd6d95f8ce8e55e15361de415b
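
    For anyone who wants to check their download against that hash, here is a minimal Python sketch (the installer file name below is just a placeholder; point it at whatever you actually downloaded):
    Code:
    	# Minimal sketch: compare a downloaded file's SHA-1 against the hash above.
    	# "Unigine_Superposition-1.0.run" is a placeholder name, not necessarily the real one.
    	import hashlib
    	
    	EXPECTED = "a98e3a7372ddf7dd6d95f8ce8e55e15361de415b"
    	
    	def sha1_of(path, chunk_size=1 << 20):
    	    h = hashlib.sha1()
    	    with open(path, "rb") as f:
    	        for chunk in iter(lambda: f.read(chunk_size), b""):
    	            h.update(chunk)
    	    return h.hexdigest()
    	
    	print("OK" if sha1_of("Unigine_Superposition-1.0.run") == EXPECTED else "MISMATCH")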



    • #42
      Originally posted by Licaon View Post
      sha1sum: a98e3a7372ddf7dd6d95f8ce8e55e15361de415b
      Thanks!

      I did eventually get a good download and can run the benchmark now. The only strange thing is that about once per second it jerks (when the framerate is lower than the monitor's refresh rate).



      • #43
        Originally posted by debianxfce View Post

        Works fine with:
        You did not bother reading anything else, right?



        • #44
          Dear Michael,

          There's something I don't understand:
          your results suggest that Extreme is more demanding than Ultra (Ultra < Extreme);
          but when I compare Ultra vs Extreme on my machine (Intel Skylake), Ultra appears to be much more demanding than Extreme (roughly 4x).

          My observations seem to be confirmed by looking at the generated "~/.Superposition/automation/log-*.txt" files:
          • Ultra:

            "pts/unigine-super-1.0.0":
            Code:
            	        <Entry>
            	          <Name>Ultra</Name>
            	          <Value>-shaders_quality 3 -textures_quality 2</Value>
            	          <Message></Message>
            	        </Entry>
            "~/.Superposition/automation/log-<ultra>.txt":
            Code:
            	Settings:
            	    Render: OpenGL
            	    Fullscreen: normal
            	    App resolution: 1920x1080
            	    Render resolution: 1920x1080
            	    Shaders: Extreme
            	    Textures: high
            	    SSRT: enabled
            	    SSAO: enabled
            	    SSGI: enabled
            	    Parallax: enabled
            	    Refraction: enabled
            	    Motion blur: enabled
            	    DOF: enabled
          • Extreme:

            "pts/unigine-super-1.0.0":
            Code:
            	        <Entry>
            	          <Name>Extreme</Name>
            	          <Value>-shaders_quality 4 -textures_quality 2</Value>
            	          <Message></Message>
            	        </Entry>
            "~/.Superposition/automation/log-<extreme>.txt":
            Code:
            	Settings:
            	    Render: OpenGL
            	    Fullscreen: normal
            	    App resolution: 1920x1080
            	    Render resolution: 1920x1080
            	    Shaders: 4K Optimized
            	    Textures: high
            	    SSRT: enabled
            	    SSAO: enabled
            	    SSGI: disabled
            	    Parallax: enabled
            	    Refraction: disabled
            	    Motion blur: enabled
            	    DOF: enabled

          From what I understand, the "4K Optimized" shaders are a lot lighter than the "Extreme" shaders.
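
          To compare the two presets across all logged settings at once, here is a minimal Python sketch (the two log file names are placeholders, as above; substitute the actual log-*.txt names from your runs):
          Code:
          	# Minimal sketch: diff the logged settings of two Superposition automation runs.
          	# The two file names below are placeholders; use the real log-*.txt names.
          	import os, re
          	
          	def read_settings(path):
          	    settings = {}
          	    with open(os.path.expanduser(path)) as f:
          	        for line in f:
          	            m = re.match(r"\s+([^:]+):\s*(.+)", line)
          	            if m:
          	                settings[m.group(1).strip()] = m.group(2).strip()
          	    return settings
          	
          	ultra = read_settings("~/.Superposition/automation/log-ultra.txt")
          	extreme = read_settings("~/.Superposition/automation/log-extreme.txt")
          	for key in sorted(set(ultra) | set(extreme)):
          	    if ultra.get(key) != extreme.get(key):
          	        print(f"{key}: Ultra = {ultra.get(key)!r} vs Extreme = {extreme.get(key)!r}")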

          I'm using Unigine Superposition 1.0.
          Any thoughts?



          • #45
            I run Ubuntu 17.04 with the Padoka PPA on my RX 480. I have severe issues with lighting in the benchmark. Can someone verify that?



            • #46
              Originally posted by L_A_G View Post
              Seeing how I'm able to get a constant 97-100% GPU utilization regardless of settings, I doubt that there's much overhead actually getting in the way of keeping the GPU completely occupied, especially when the benchmark never seems to go much above 20% CPU utilization on an R7 1700 with SMT off. During the development of the more recent versions of OpenGL a lot of effort went into CPU overhead reduction under the banner of "AZDO", or "Almost Zero Driver Overhead". Seeing how this actually requires OpenGL 4.5, I wouldn't be the least bit surprised if many of these features are used.
              Vulkan is about more than just solving the CPU bottleneck; it's also about making GPU rendering more efficient. So 100% GPU load on OpenGL is different from 100% GPU load on Vulkan. While solving certain bottlenecks around draw calls is important and was the main motivation for creating Vulkan in the first place, the rendering process itself also gets a boost if properly optimized. Nevertheless, it's nice to see OpenGL improvements - but it's only half of the story.

              There is a reason why AMD got up to 70% more performance under (Windows) Vulkan for Doom. That wasn't just the CPU side.
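
              (As an aside, the "AZDO" techniques quoted above mostly come down to patterns like persistent-mapped buffers, which avoid per-frame round-trips into the driver. A rough sketch of that pattern follows - PyOpenGL and glfw are my own choices for illustration, and whether Superposition actually does this internally is pure speculation.)
              Code:
              	# Rough sketch of AZDO-style persistent buffer mapping (GL 4.4 buffer storage).
              	# PyOpenGL + glfw are assumptions for illustration; Superposition's actual code is unknown.
              	import ctypes
              	import glfw
              	from OpenGL.GL import (
              	    glGenBuffers, glBindBuffer, glBufferStorage, glMapBufferRange,
              	    GL_ARRAY_BUFFER, GL_MAP_WRITE_BIT, GL_MAP_PERSISTENT_BIT, GL_MAP_COHERENT_BIT,
              	)
              	
              	glfw.init()
              	glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 4)
              	glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 4)
              	glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
              	glfw.window_hint(glfw.VISIBLE, glfw.FALSE)
              	win = glfw.create_window(64, 64, "azdo-sketch", None, None)
              	glfw.make_context_current(win)
              	
              	SIZE = 1 << 20  # 1 MiB of per-frame vertex data
              	flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT
              	
              	buf = glGenBuffers(1)
              	glBindBuffer(GL_ARRAY_BUFFER, buf)
              	# Immutable storage, mapped once and kept mapped: the app writes into it every
              	# frame without further glMapBuffer/glBufferSubData calls into the driver.
              	glBufferStorage(GL_ARRAY_BUFFER, SIZE, None, flags)
              	ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, SIZE, flags)
              	ctypes.memset(ptr, 0, SIZE)  # stand-in for writing real vertex data
              	
              	glfw.terminate()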



              • #47
                Originally posted by Shevchen View Post
                Vulkan is about more than just solving the CPU bottleneck; it's also about making GPU rendering more efficient. So 100% GPU load on OpenGL is different from 100% GPU load on Vulkan. While solving certain bottlenecks around draw calls is important and was the main motivation for creating Vulkan in the first place, the rendering process itself also gets a boost if properly optimized. Nevertheless, it's nice to see OpenGL improvements - but it's only half of the story.
                Applications, both the CPU and GPU parts, obviously get more control under Vulkan, since they have to take over much of the work drivers used to do. However, we've seen that this doesn't necessarily lead to an actual performance gain, as applications may not manage their new duties as well as the drivers that used to handle them. Because of this, performance right now is based more on the quality of the implementation than on which API is being used.

                There is a reason why AMD got up to 70% more performance under (Windows) Vulkan for Doom. That wasn't just the CPU side.
                AMD saw some pretty nice gains in Doom with Vulkan, but you should remember that their OpenGL drivers have a reputation for being pretty damn crummy. Nvidia, who are known for having considerably less crummy OpenGL drivers, saw a considerably smaller bump in performance. Remember too that Vulkan was built on Mantle, AMD's own API, of which they made a rather decent implementation, so AMD could re-use most of an already good driver for their Vulkan driver.



                • #48
                  Originally posted by L_A_G View Post

                  Applications, both the CPU and GPU parts, obviously get more control under Vulkan, since they have to take over much of the work drivers used to do. However, we've seen that this doesn't necessarily lead to an actual performance gain, as applications may not manage their new duties as well as the drivers that used to handle them. Because of this, performance right now is based more on the quality of the implementation than on which API is being used.
                  Correct - and in the case of this particular benchmark, I'd like to see an implementation done well. Right now that is not the case, and it might be worth a couple of dev-hours in order to have a nice reference point. The problem with Vulkan right now is that we only have very few examples to refer to (like Doom and a couple of demos) - not really much to validate against.
                  Originally posted by L_A_G View Post
                  AMD saw some pretty nice gains in Doom with Vulkan, but you should remember that their OpenGL drivers have a reputation for being pretty damn crummy. Nvidia, who are known for having considerably less crummy OpenGL drivers, saw a considerably smaller bump in performance. Remember too that Vulkan was built on Mantle, AMD's own API, of which they made a rather decent implementation, so AMD could re-use most of an already good driver for their Vulkan driver.
                  I'm not sure about your view of cause and effect here. Nvidia's rendering handling is much less restrictive than AMD's. For Nvidia it's okay not to have clean code; the driver eats it anyway and tries its best to render. AMD cards throw you a warning or an error, or give you exactly the dirty result you coded. I've seen quite a few dev posts where people wasted several hours finding the one tiny variable they had to declare in order to get AMD cards running the same code, while Nvidia cards took it just like that.

                  But here is the thing: that's not AMD's fault. In fact, it's a good lesson for devs to learn to write clean code. It's a pain in the ass, but a good one.

                  To take a more drastic example:

                  Code:
                  	b = 1;
                  	c = 2;
                  	a = b + c;
                  	sprintf(a)

                  Looks okay, right? In this case, AMD cards would ask you "How the heck are a, b and c defined? Are they an int, a double, a char or a potato?" And AMD is right here. Nvidia goes more along the lines of "looks like a number, probably an int, what could possibly go wrong? *calculates*"

                  Now for the consequences:
                  To write fast and clean code, devs have to dig through the entire thing again (which only happens in new titles or in projects with long-term funding) in order to get the best performance out of it. They may even have to rewrite the core engine and expand it to take advantage of the 300 new possible techniques Vulkan gives them. And this has to be done fresh - maybe you get documentation, maybe a book, maybe some demos - but in the end, all the "old stuff" has to be replaced.

                  I also don't think Mantle is heavily favoring AMD here. It's just that Nvidia optimized heavily for DX and hit a wall with Vulkan, since they streamlined their architecture for DX.
                  Last edited by Shevchen; 26 April 2017, 05:10 AM.



                  • #49
                    Originally posted by Shevchen View Post
                    Correct - and in the case of this particular benchmark, I'd like to see an implementation done well. Right now that is not the case, and it might be worth a couple of dev-hours in order to have a nice reference point. The problem with Vulkan right now is that we only have very few examples to refer to (like Doom and a couple of demos) - not really much to validate against.
                    If DX12, which is very similar in many crucial regards and of which there are many more existing implementations, is anything to go by, then the new APIs really do need to be implemented well, and developers can no longer coast on good work from driver developers.

                    ...
                    There's no need to try to impress me with that explanation; based on how simplistic it is, I suspect I'm more familiar with the subject than you are. Nvidia's drivers aren't just better at coping with badly written code, they also run well-written code considerably better. Where AMD's drivers will assume that an object declared as static will not be modified and adjust their behavior accordingly, Nvidia's drivers make no such assumption and thus keep working when developers do mindbogglingly stupid things like modifying it anyway.

                    AMD's approach is the more altruistic one, while Nvidia's is the more pragmatic one. This pragmatic-versus-altruistic split permeates through pretty much everything Nvidia and AMD do. AMD went in heavy and early on low-level APIs, while Nvidia focused on engineering around the limitations of the old high-level ones. AMD went for unified shaders early, while Nvidia stayed with the more traditional model for longer. AMD went for a much higher level of parallelism, while Nvidia focused more on per-thread performance.

                    I also don't think Mantle is heavily favoring AMD here. It's just that Nvidia optimized heavily for DX and hit a wall with Vulkan, since they streamlined their architecture for DX.
                    I'm pretty sure AMD can re-use much of their Mantle driver when Vulkan is so heavily based on it that many functions are essentially the same apart from the "vk" naming prefix.



                    • #50
                      Hey, I'm not trying to impress you here, I'm just trying to express my opinion on a rather specific difference in very few words.

                      Now, as for DX12 titles, there isn't a single title out there performing well on it, and on Vulkan we have exactly one. One might argue that on Windows "Rise of the Tomb Raider" with DX12 has excellent CrossFire support (some tech reviewers like Adored TV even recommend it if you're looking for a cheaper route to more FPS than Nvidia), but that's only the case because RotTR is pretty badly coded and two GPUs can compensate for that. Nvidia doesn't scale as well in this regard, and the bigger problem: this is all moot for Linux.

                      I'm trying to evaluate whether Vega (once it comes out) is worth the money I might put into it, so I'm looking for benchmarks that give me an educated picture of how this GPU performs. I have a very specific need for my next GPU (it must run Vulkan well, because it will have to run Star Citizen in the future), and since I plan to upgrade my monitor too (the one I have now was a cheap stopgap I bought when my older one died), the GPU should support things like HDR, which pretty much screams FreeSync 2.

                      Is there a benchmark out there that would give me this kind of educated guess, besides looking at Doom and hoping for the best? That's why I hoped Unigine Superposition would have a good Vulkan implementation running on Linux, so we would finally have a valid data point.

