RadeonSI/RADV Mesa 17.3 + AMDGPU DC vs. NVIDIA 387.12 Linux Gaming Performance

  • #21
    Can you also try with the xf86-video-amdgpu driver rather than modesetting? You're running 1.19.3, which was released back in March.
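    A minimal way to do that, assuming xf86-video-amdgpu is installed, is to drop a snippet like this into /etc/X11/xorg.conf.d/ (e.g. a file named 20-amdgpu.conf; the name is just the conventional example) so Xorg picks the amdgpu DDX instead of modesetting:

        Section "Device"
            Identifier "AMD Graphics"
            Driver     "amdgpu"
        EndSection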



  • #22
    Originally posted by marek View Post

    The bad ParaView performance seems to be an inefficiency... well, almost a bug in Mesa. I'm working on a fix right now.
    Let us (me ;-) know if you come up with something.
    Greetings and good night for today from old Germany.



  • #23
    bridgman nuetzel False alarm. I know where the bottleneck is. I don't know how to make it faster yet.



  • #24
    Originally posted by marek View Post
    bridgman nuetzel False alarm. I know where the bottleneck is. I don't know how to make it faster yet.
    Not sure if this bit of info is worth anything for inspiration or code-path comparison, but... ParaView was originally recommended to me for workstation graphics testing by the Intel guys working on the SWR rasterizer, as one of the workstation graphics tests where SWR vs. LLVMpipe can show a big performance difference between their code paths.
    Michael Larabel
    https://www.michaellarabel.com/
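    For reference, one way to compare those two software rasterizers, assuming a Mesa build that includes both, is Mesa's GALLIUM_DRIVER environment variable (the benchmark command here is only a placeholder):

        GALLIUM_DRIVER=llvmpipe ./run-paraview-benchmark
        GALLIUM_DRIVER=swr ./run-paraview-benchmark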



  • #25
    Originally posted by Leopard View Post
    Min fps is a major metric.
    You've already been told that it is a useless metric because it doesn't say how often it happens. If it happened only once, you can ignore it.



  • #26
    Originally posted by pal666 View Post
    You've already been told that it is a useless metric because it doesn't say how often it happens. If it happened only once, you can ignore it.
    Especially if you get that 3 fps on a loading screen!



  • #27
    bridgman nuetzel The problem is that vertex buffers are in RAM, not VRAM. I can get 4x better FPS if I put them in VRAM. This is a driver decision, not an app decision - it's fixable universally. That should get us to 18 FPS for Vega. The next bottleneck is the GPU vertex throughput. Not sure what we can do about that - maybe primitive shaders on Vega, which would be an insane amount of work.
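    To illustrate the kind of driver-side decision being described (this is only a sketch with invented names, not the actual radeonsi code), a placement heuristic along these lines keeps CPU-written buffers in system RAM while putting GPU-read vertex data in VRAM:

        /* Hypothetical placement heuristic; enum/struct names are made up. */
        #include <stdio.h>

        enum placement {
            PLACE_GTT,  /* system RAM: CPU-visible, slow for repeated GPU reads */
            PLACE_VRAM  /* device memory: fast for GPU reads */
        };

        struct buffer_desc {
            int is_vertex_buffer;   /* bound as a vertex buffer */
            int cpu_writes_often;   /* re-uploaded by the CPU every frame */
        };

        static enum placement choose_placement(const struct buffer_desc *b)
        {
            /* Vertex data the GPU reads on every draw belongs in VRAM;
             * that is the ~4x difference mentioned in the post above. */
            if (b->is_vertex_buffer && !b->cpu_writes_often)
                return PLACE_VRAM;
            /* Frequently CPU-written data stays in GTT (system RAM). */
            return PLACE_GTT;
        }

        int main(void)
        {
            struct buffer_desc vbo = { 1, 0 };
            printf("static vertex buffer -> %s\n",
                   choose_placement(&vbo) == PLACE_VRAM ? "VRAM" : "GTT");
            return 0;
        }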



  • #28
    Originally posted by Leopard View Post

    Min fps is a major metric.

    Did you even encounter one of them in CS:GO, Rocket League, or Dota 2-like games?

    It is not so important in single-player games, but it is critical in multiplayer.

    Also go do your AMD fanboying somewhere else. I also love AMD, but I'm smart enough not to call bad results good ones.
    Ok, so everyone who doesn't agree with you is a fanboy of the opposite side? That's the very definition of fanboyism.

    Minimum fps doesn't tell you much, as it shows you just one single moment. A GPU could perform smoothly overall, dip for a single moment, and receive a harsh min-fps rating. Another GPU could have a better min fps but reach it more often, and the whole experience would be worse due to the lack of smoothness.

    What you really want is the 95th or 99th percentile, which tells you the minimum fps of the best 95% or 99% of frames. This way you remove outliers from the data and can actually say something about smoothness (see the worked example below).

    Also, take a look at the first Mad Max result, where the Fury has a min fps of 88.25 and the 1070 has 1.67. Are you telling me the Fury is ~50x smoother than the 1070 there? I think this shows how useless a metric min fps is.
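    As a worked example of that percentile idea (the frame times below are made up, including one 300 ms loading-screen hitch), sorting the per-frame times and reading off the 99th percentile gives a far more representative number than the single worst frame:

        /* Made-up frame-time data, used only to illustrate percentiles vs. min fps. */
        #include <stdio.h>
        #include <stdlib.h>

        static int cmp_double(const void *a, const void *b)
        {
            double x = *(const double *)a, y = *(const double *)b;
            return (x > y) - (x < y);
        }

        int main(void)
        {
            /* Per-frame times in milliseconds; one 300 ms outlier (a loading hitch). */
            double ms[] = { 16.6, 16.8, 17.1, 16.5, 16.9, 17.3, 16.7, 300.0, 16.6, 16.8 };
            size_t n = sizeof(ms) / sizeof(ms[0]);

            qsort(ms, n, sizeof(ms[0]), cmp_double);

            /* Min fps is defined by the single worst frame... */
            printf("min fps: %.1f\n", 1000.0 / ms[n - 1]);

            /* ...while the 99th-percentile frame time ignores the worst 1% of frames. */
            size_t idx = (size_t)(0.99 * (n - 1));
            printf("99th percentile: %.1f ms (%.1f fps)\n", ms[idx], 1000.0 / ms[idx]);
            return 0;
        }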



                  • #29
                    VEGA64 only %10 faster then a R9 Fury card, truly my brain struggles with this..... (in actual game performance)

                    This sort of issue really needs to be investigated by AMD, the Vega64 should be at minimal %30 faster then a standard R9 Fury, SURELY!¿
                    Last edited by theriddick; 26 October 2017, 03:55 AM.



  • #30
    Originally posted by pal666 View Post
    You've already been told that it is a useless metric because it doesn't say how often it happens. If it happened only once, you can ignore it.
    Which is why it's getting more popular nowadays to show 1% and 0.1% minimum framerates, as well as frame times instead of fps.

