Radeon Windows 10 vs. Linux RadeonSI/RADV Gaming Performance

  • #11
    Originally posted by schmidtbag View Post
    Wow, I was not expecting the performance to be so close in so many tests. RadeonSI obviously still has plenty of room for performance improvements, but it is already very competitive with Windows. Very exciting.
    Also, comparing these results with the NVIDIA ones (where the driver should be roughly as fast as the Windows one, and yet there are similar gaps between the Windows and Linux results), it seems that most of the problems come from bad ports rather than deficiencies in Mesa.



    • #12
      Well, we need more minimum-FPS charts: if Windows averages 100 FPS and Linux averages 60 FPS, that is still fine, but not if the minimum FPS drops below 30 FPS, because then the game stutters.
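The point about averages hiding stutter can be made concrete. A minimal sketch (with hypothetical frame-time numbers, not data from the article) computing average, minimum, and "1% low" FPS from per-frame render times:

```python
def fps_stats(frame_times_ms):
    """Summarize per-frame render times (in milliseconds) into FPS metrics."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)           # average FPS over the run
    min_fps = 1000.0 / max(frame_times_ms)               # worst single frame
    slowest = sorted(frame_times_ms, reverse=True)
    one_pct = slowest[: max(1, n // 100)]                # slowest 1% of frames
    low_1pct_fps = 1000.0 * len(one_pct) / sum(one_pct)  # "1% low" FPS
    return avg_fps, min_fps, low_1pct_fps

# A mostly smooth run with an occasional 50 ms hitch: the average looks
# great, but the minimum/1%-low numbers reveal the stutter.
frames = [10.0] * 99 + [50.0]
avg, mn, low1 = fps_stats(frames)   # avg ~96 FPS, but min and 1%-low are 20 FPS
```

That is exactly the scenario described above: a 96 FPS average would look comfortable on a chart while the worst frames are at 20 FPS.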



      • #13
        An average of 53 FPS in Deus Ex on the Fury at 1080p High is certainly playable. :-) It's not AWESOME, but very much playable.



        • #14
          Originally posted by tomtomme View Post
          what are you talking about? The Talos Principle?
          No, about Windows driver versions and their internal numbering. Michael runs 17.1.2 here (which is based on, let's say, an older stable branch, like a stable Mesa branch), while 17.2.1, released two days ago, is the more current branch.

          But I get it: better not to mention these Windows versions, to avoid confusion, because they currently have nearly the same version numbers as current Mesa. AMD bumps the second number by month, while Mesa's second number is moving to a quarterly cadence.

          But I could talk about The Talos Principle too. Its DX11 renderer is top-notch optimized, even whitelisted for everything in the blob driver, which means it is totally aligned with DX11 and with AMD's proprietary Windows driver... so no one can really beat that. Anyone who gets to 70% of that on a GL driver, with whatever tweaks, should be really, genuinely happy.

          And the GL renderer is profiled for the blob drivers on both Windows and Linux, making for something like a 40% difference at the low end... It is mostly just a CPU-bound difference there, so some iteration (not the current one) of Marek's threaded GL work might help with that.
          Last edited by dungeon; 15 February 2017, 02:46 PM.



          • #15
            @Michael: I don't know whether you've tried it, but the Quake Epsilon build (http://www.moddb.com/mods/quake-epsilon-build), a variation of the DarkPlaces engine, is quite demanding on Ultra settings and, if I'm not mistaken, fully scriptable. Maybe it would be interesting to include it in some further testing.



            • #16
              Michael, about that Civilization VI benchmark, isn't 8x MSAA at 4K resolution a bit too much? I use a 24" 1080p monitor and never go beyond 4x MSAA because I see no benefit from it. In some games I use even less, because FXAA is enough. Maybe at lower MSAA settings the game would become more playable on Linux?

              Anyway, thanks for the big test, and congratulations to everyone who, in one way or another, contributed to the AMD/ATI open-source effort.



              • #17
                Originally posted by mannerov View Post
                and Unigine, but possibly NVIDIA is replacing formats or doing tricks for Unigine, so...).
                The AMD blob does that too for the Unigine benchmarks; it even looks warmer/nicer with the replacements.

                Joke aside, these profiles are not there to do tricks just because, or just to be fastest... they are there so the game scales well on any setup: iGPUs, dGPUs, CrossFire, whatever.
                Last edited by dungeon; 15 February 2017, 03:19 PM.



                • #18
                  Originally posted by M@GOid View Post
                  Michael, about that Civilization VI benchmark, isn't 8x MSAA at 4K resolution a bit too much? I use a 24" 1080p monitor and never go beyond 4x MSAA because I see no benefit from it. In some games I use even less, because FXAA is enough. Maybe at lower MSAA settings the game would become more playable on Linux?
                  Does Linux get negatively impacted by AA more than Windows does? I think all that should matter is that the tests are done equally.



                  • #19
                    When comparing the results (without reading the text at the same time), it is often difficult to figure out which Windows renderer is being used (OGL, DX11 or DX12). In some cases, for example Tomb Raider and Shadow of Mordor, I didn't find that info at all.

                    Trying to figure out the cause of the performance differences, I often find myself wondering about the GPU usage and the per-core CPU usage.

                    Not sure if I should expect a DX11->OGL port to have working (triple-)buffering, but probably no-one here knows either.



                    • #20
                      Originally posted by Michael View Post

                      Unfortunately AMDGPU-PRO doesn't work on 16.10 yet, and this comparison was already a ton of work as is... (and judging from the NVIDIA article, I only got 2 or 3 new subscribers, so it barely covered the costs of all that benchmarking, and it remains to be seen whether this article will bring any new subscribers or donations)
                      Subscribed.

