AMDGPU/RadeonSI Linux 4.10 + Mesa 17.1-dev vs. NVIDIA 378.09 Performance


  • #21
    Originally posted by efikkan View Post
    Nvidia has much better support for open standards, which actually count.
    Ahahahahahahahahahahahahah!

    Yeah, like OpenCuda, for example. Oops.
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)

    • #22
      I always wonder how CPU usage compares in tests where AMD/Mesa gets totally mutilated by even the weakest NVIDIA card.

      • #23
        Originally posted by artivision View Post

        Someone should tell Nvidia that if the GTX 1080 gives 156 fps on Unigine, then the GTX 1060, being half the chip, should give 78 fps and not 100. Nvidia is not to be trusted, as always.
        Someone should teach you how a video card works (or workloads in general), but that someone is not me.
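
        A toy model makes the rebuttal concrete. The numbers below are fitted to the figures quoted above, not real measurements, and the fixed/shader split is my own assumption: part of every frame (CPU and driver work, fixed-function stages, memory latency) simply doesn't scale with shader count, so halving the shader array doesn't halve the frame rate.

        # Toy frame-time model (assumed numbers, fitted to the ~156/~100 fps quoted above).
        def fps(fixed_ms, shader_ms, throughput):
            # Frame cost = a part that ignores shader count + a part that scales with it.
            return 1000.0 / (fixed_ms + shader_ms / throughput)

        FIXED_MS = 2.8   # per-frame cost that does NOT shrink with fewer shaders (assumed)
        SHADER_MS = 3.6  # shader-bound cost on the full chip (assumed)

        print(f"full chip (GTX 1080-like): {fps(FIXED_MS, SHADER_MS, 1.0):.0f} fps")  # ~156
        print(f"half chip (GTX 1060-like): {fps(FIXED_MS, SHADER_MS, 0.5):.0f} fps")  # ~100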

        • #24
          The gap is still rather high, but it's not as bad as when I bought my 7970. The open source drivers are stable and fast enough to play any games I'm interested in, and the ones that interest me most need Gallium Nine support, which Nvidia simply doesn't have. It's a personal choice, as always, but AMD has no dealbreakers at all for me.

          • #25
            Considering NVIDIA closed drivers have a shader cache and other features the open drivers still don't have (or can't have, like game-specific profiles), the results for RadeonSI don't look that bad (inferior, but perfectly playable in most cases). I wonder how it will perform once that code lands.

            • #26
              Originally posted by Aeder View Post
              Considering NVIDIA closed drivers have a shader cache and other features the open drivers still don't have (or can't have, like game-specific profiles), the results for RadeonSI don't look that bad (inferior, but perfectly playable in most cases). I wonder how it will perform once that code lands.
              FYI: Shader cache is about faster load time, not faster rendering.

              • #27
                Originally posted by efikkan View Post
                FYI: Shader cache is about faster load time, not faster rendering.
                That's only true if the game preloads shaders, not when they are loaded/compiled on the fly.

                • #28
                  Originally posted by geearf View Post
                  That's only true if the game preloads shaders, not when they are loaded/compiled on the fly.
                  What?
                  No, the driver uses a shader cache to store compiled shaders. OpenGL is designed to compile GLSL every time. Using a shader cache only improves loading time (either at startup or when loading new levels in-game); no game loads and compiles shader programs per frame! (That would be the most stupid developer ever.)
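
                  As a minimal sketch of that mechanism (illustrative names, not Mesa's actual code): key the compiled binary on a hash of the GLSL source plus the compiler version, so the expensive compile happens once and every later request becomes a lookup. The binary executes at the same speed either way, which is why a cache helps load times, or mid-game hitches when shaders arrive late, but never steady-state FPS.

                  import hashlib
                  import time

                  CACHE = {}  # sha1(driver version + source) -> compiled binary

                  def slow_compile(source):
                      time.sleep(0.05)  # stand-in for a real GLSL compile (~50 ms)
                      return b"native code for " + source.encode()

                  def get_shader(source, driver="17.1-dev"):
                      key = hashlib.sha1((driver + source).encode()).hexdigest()
                      if key not in CACHE:
                          CACHE[key] = slow_compile(source)  # miss: pay the compile cost once
                      return CACHE[key]                      # hit: effectively free

                  src = "void main() { gl_FragColor = vec4(1.0); }"
                  for run in (1, 2):
                      t0 = time.perf_counter()
                      get_shader(src)
                      print(f"run {run}: {(time.perf_counter() - t0) * 1000:.1f} ms")
                  # run 1 pays ~50 ms; run 2 is ~0 ms. Rendering speed is unchanged.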

                  • #29
                    Originally posted by Qaridarium

                    I bought an AMD RX 470 today and I use the open source drivers.

                    So people with a brain do not trust the "Nvidia" hype, because they know what a corporatist/fascist monopoly of a company it is.
                    It's not hype, it's just basic math. Nvidia gets more FPS in pretty much every scenario. There's no question that they are anti-consumer and anti-OSS, but their stuff works, and it works a lot better than AMD's. I have a 390X and it works wonderfully in Windows. In Linux, it's hot garbage.

                    • #30
                      Originally posted by efikkan View Post
                      Using a shader cache only improves loading time (either in startup or loading of new levels ingame), no game loads and compiles shader programs per frame!
                      As far as I could follow the discussions on freedesktop.org, the scenario you're describing is currently the main issue with Unreal Engine based games.

                      For those who still find the results disappointing:
                      1. Unlike with proprietary drivers, you don't have to worry about the seamless integration of the open source drivers into any distribution.
                      2. You don't have to fear a black screen after updating your kernel.
                      3. It's unlikely that the open source drivers introduce new bugs in a stable release, since they are openly tested for several months. Just look at what new driver updates do on Windows machines! I never had a driver before that worked as reliably as AMDGPU. RadeonSI does still lack some OpenGL features, but all of that will be fixed in clean code that everyone can improve. Right now, reaching the OpenGL 4.5 version string matters more than further performance work, and only two features are still missing for that.

                      4. Have fun reporting your bugs to Nvidia! If you find some in RadeonSI, you can even write your own fix, which will likely improve Nouveau and i965 at the same time!
                      5. Your rendering quality won't be degraded to achieve better performance; if it were, that would be exposed in the open sources. Do you know what tricks Nvidia implements in their closed drivers?
                      6. Here on Phoronix I usually see tests run at 1080p on Ultra rather than at 4K, or at 4K only on low settings. These benchmarks also often hit CPU limits, starting as low as a GTX 1050. So AMD GPUs generally look slower when you don't really stress them, and obviously none of them can rise above the Nvidia GPUs at the CPU limit. When everything above a GTX 1050 is unnecessary for the specific game, it makes no sense to judge a Fury X for being 10-20% slower, or a GTX 1080 for being only 10% faster...

                      In my opinion it's hard to stress all the shaders in AMD GPUs. The only Linux game that manages it well is Deus Ex: Mankind Divided, and it took long enough to get the drivers ready to support it properly. With maxed out settings instead of high settings I wouldn't expect any big performance decrease. Most companies also develop for Linux only alongside their main target platforms, so they won't heavily optimize those ports. All of that leads to the conclusion that you shouldn't judge cards made for high loads with games that could easily run on an Intel IGP.


                      Finally, with my RX 480 I can play every game, except Unreal Engine based games and Shadow of Mordor (which is not a Mesa issue), not just on Ultra but maxed out, at 1440p, with more than 30 FPS average and mostly 60-100 FPS.
                      For most of these games, decreasing the resolution or any setting that relieves the GPU doesn't get me significantly more FPS!
                      So if you buy such a GPU for low-end gaming and old screens you are limited: parts of the GPU's compute units sit idle, and the lower GPU clock does its work more slowly than the faster clocks on current Nvidia cards...

                      If you go for high resolutions and maximum details, or just don't want to run into issues with proprietary stuff and all the strain of proprietary bugs, hoping they get fixed within two months, there is reason enough to prefer RadeonSI - especially once Vega arrives with higher clocks, which quite likely brings it closer to Nvidia's performance in CPU-limited and low-load scenarios.
                      Last edited by oooverclocker; 27 January 2017, 02:58 PM.
