Mesa 17.1-dev vs. AMDGPU-PRO 16.60 vs. NVIDIA 378 Linux Gaming Tests


  • #11
    Originally posted by funfunctor View Post
    spilling?
    Either spilling or Windows 10, since I dunno how else to understand bridgman's "joke"

    Originally posted by bridgman View Post
    The 460 is faster with open source than closed source on all the games that don't start with 'D'.
    If you ask me, there is a *major* problem there in general... since even Vulkan does not scale
    Last edited by dungeon; 30 January 2017, 07:39 PM.



    • #12
      Originally posted by dungeon View Post
      The RX 460 has been like that since the beginning: a lower-than-expected performer on the open-source driver, in many cases even slower than the R7 260 Bonaire. The RX 460 should not be sometimes but *always* faster than that... so yeah, there is something weird with P11 specifically; I guess none of the open-source devs has the card

      I wouldn't consider a comparison with the amdgpu-pro driver as very valuable, or as a claim of where things should be, since that driver has had its own perf regressions since fglrx times
      I'm not seeing it (at least relative to the RX 480) - the 460 has <40% of the shader power and ~50% of everything else, and it's running at around 45% of the speed of the 480. There is one exception where the difference is bigger; I'm guessing that is memory-size related, with fewer heuristics in the open-source stack for not-quite-enough-VRAM scenarios.
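      Back-of-envelope check of that scaling (the shader counts, clocks, ROP counts and bandwidth figures below are assumed reference specs, not numbers from this thread):

      # Rough throughput ratios, RX 460 vs RX 480 (assumed reference specs)
      shader_ratio = (896 * 1200) / (2304 * 1266)  # ALUs x MHz: ~0.37, i.e. <40%
      rop_ratio = 16 / 32                          # ROPs: 50%
      bw_ratio = 112 / 256                         # GB/s: ~44%
      print(f"shaders {shader_ratio:.0%}, ROPs {rop_ratio:.0%}, bandwidth {bw_ratio:.0%}")
      # An observed ~45% frame-rate ratio lands between the shader ratio
      # and the "everything else" ratios, consistent with the claim above.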

      I agree that the 460 should be faster than the 260 in all cases though.

      And CAN YOU TURN OFF THOSE DAMN SMILIES?
      Last edited by bridgman; 30 January 2017, 08:40 PM.



      • #13
        The Dirt: Showdown results are absolutely unbelievable and incompatible with my own results.
        My results with an RX 480 & Mesa 17.1-devel:
        1440p@"Ultra low", 4xMSAA: 157 FPS AVG
        1440p@"Ultra", 4xMSAA: 23 FPS AVG
        1440p@"Ultra", 4xMSAA, but Advanced Lighting off: 93 FPS AVG

        Adv. Lighting is pretty surely buggy, but going from 23 FPS at 1440p it shouldn't go up to >30 FPS on a 4K monitor. So something seems to be wrong.
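        Quick sanity check on that (pure pixel-count scaling, assuming the game stays GPU-limited; standard 2560x1440 and 3840x2160 resolutions):

        # If frame rate scaled purely with pixel count (GPU-limited case):
        pixels_1440p = 2560 * 1440
        pixels_4k = 3840 * 2160                  # 2.25x the pixels of 1440p
        expected_4k_fps = 23 * pixels_1440p / pixels_4k
        print(f"{expected_4k_fps:.1f} FPS")      # ~10.2 FPS - nowhere near >30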
        Last edited by oooverclocker; 30 January 2017, 09:04 PM.



        • #14
          Originally posted by bridgman View Post
          And CAN YOU TURN OFF THOSE DAMN SMILIES ?
          Why me? I just type smiley signs and the forum translates them into these pictures; I didn't draw them, for sure
          Last edited by dungeon; 30 January 2017, 09:40 PM.



          • #15
            People are aware that the Fury is an 8.5 TFLOP card while the 980 Ti is 6.5 TFLOP! It does make one scratch their head pretty hard at these results sometimes, wondering what on Earth is sabotaging the AMD performance. Even the Fury under Windows under-performs compared to the 980 Ti (by 5-10 FPS), yet the hardware itself is fully capable of kicking NVIDIA's ass.

            If AMD were able to get their drivers up to snuff, then the Fury might have been sufficient until Vega comes, which I have concerns over the drivers for: we are probably going to see a 12.5 TFLOP card perform like a GTX 980. It's going to be upsetting to see those results; we all know that's what's going to happen unless AMD can double or triple driver optimization before then.



            • #16
              Originally posted by theriddick View Post
              People are aware that the Fury is an 8.5 TFLOP card while the 980 Ti is 6.5 TFLOP!
              Incorrect: this Fury (without the X) is 7 TFLOPS, while the Fury X is 8.5 TFLOPS.
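              For reference, a minimal sketch of where those numbers come from (2 FLOPs per ALU per clock for fused multiply-add; the shader counts and clocks below are assumed reference specs):

              # Peak FP32 rate = ALUs x 2 FLOPs/clock (FMA) x clock speed
              def tflops(shaders, clock_ghz):
                  return shaders * 2 * clock_ghz / 1000.0

              print(tflops(3584, 1.000))  # R9 Fury:            ~7.2
              print(tflops(4096, 1.050))  # R9 Fury X:          ~8.6
              print(tflops(2816, 1.075))  # GTX 980 Ti (boost): ~6.1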



              • #17
                OK, cool. So we should be seeing performance on par with the 980 Ti. Getting there, I guess; there are still a lot of head-scratching issues at 1080p resolutions. Are the drivers CPU-bound for some reason?



                • #18
                  The Fury should be on par with the 980, and the Fury X should be on par with the 980 Ti... The Pro Duo is faster than anything (16 TFLOPS) in a single slot, but we don't have drivers for that beast

                  Yeah, drivers are CPU-bound, GPU-bound, memory-bandwidth-bound, compiler-bound... it is all about limitations, so it depends, case by case. But I think the open-source stack is also more GLX-bound and KMS-bound than the NVIDIA blob (particularly those two might indirectly make scalability look entirely different). But yeah, being CPU-bound is the most common case for everybody.

                  I always said: give Temash APUs to the developers, as that is the only way to fix CPU-boundedness. If CPU slowness doesn't start to get on someone's nerves, they won't fix it... it is the fashionable habit of buying new CPUs which has automatically "fixed" the issue for decades

                  But it is a lost cause, as now we will all buy Zen CPUs and they won't fix it again; instead they will say "buy Zen, it is new, it is cheap" and start developing GNOME 4 with even more wasted trapezoids

                  Just give a GPU driver developer the fastest CPU and he will immediately become lazy and won't optimize anything
                  Last edited by dungeon; 30 January 2017, 11:57 PM.



                  • #19
                    Originally posted by debianxfce View Post
                    Also, Phoronix benchmarks are out of touch with the real world with a 640 USD CPU.
                    But that one has the greatest AVX2 performance, a good showcase for Clear Linux... and when you put in an average Pentium that costs 10 times less, without AVX units, you see nothing
                    Last edited by dungeon; 31 January 2017, 12:31 AM.



                    • #20
                      Originally posted by theriddick View Post
                      OK, cool. So we should be seeing performance on par with the 980 Ti. Getting there, I guess; there are still a lot of head-scratching issues at 1080p resolutions. Are the drivers CPU-bound for some reason?
                      There are a million different reasons that FLOPs don't necessarily match up with game performance. FLOPs are basically a micro-benchmark, and while micro-benchmarks tell you something about the hardware, it's generally not a lot, and it's very specific to the test. It'd be like saying one card is faster at glxgears and then wondering why it didn't perform at the same relative speed in Doom.

                      My guess as to the real underlying cause is this: http://www.anandtech.com/show/10536/...ation-analysis - which presumably lets them "reduce the memory bandwidth for rendering", which would lead to better performance, lower power usage, and in turn higher clocks (and more performance).
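                      A minimal sketch of why that kind of compression helps (the raw bandwidth is the 980 Ti's paper spec; the 25% savings figure is purely hypothetical, for illustration):

                      # Effective bandwidth with lossless framebuffer compression
                      raw_bw_gbs = 336.5    # GTX 980 Ti paper bandwidth, GB/s
                      savings = 0.25        # hypothetical: 25% fewer bytes moved
                      effective_bw = raw_bw_gbs / (1 - savings)
                      print(f"{effective_bw:.0f} GB/s effective")  # ~449 for compressible traffic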
                      Last edited by smitty3268; 31 January 2017, 12:57 AM.

