NVIDIA 375.10 vs. Linux 4.8 + Mesa 13.1-dev AMD GPU Benchmarks


  • #81
    Originally posted by Xen0sys View Post

    Close to refresh rate or not - playability is one thing and optimization is another. It's depressing that a powerful Fury has to go full swing on a game that a 950 can handle with little effort. Says a lot about driver efficiency.
    Let me point you to some information you may not know about.
    There seems to be a major problem with NV's default settings in the control panel. By default it's supposed to "Let Application Decide" on AF, yet it overrides in-game settings to **** quality for texture filtering. It started with a guy on OCUK who benched a Titan X and a Fury X, and people noticed...

    [embedded image comparisons not shown]

    Some are old, some are new; the point is nVidia has been cheating on visual fidelity for decades. Every single time you see nVidia with a higher benchmark number, you can simply remind yourself it came at the cost of visual fidelity.



    • #82
      Originally posted by Xen0sys View Post
      If it was CPU limited, all cards would be hit proportionately.
      And that's what you are seeing... all of the cards running AMD drivers get the same hit, and all the cards running NVidia drivers get the same hit.

      I expect that running the same tests at higher resolution would show different results, since some of the cards would be GPU-limited. Look at CS:GO for an example - the issue here is that Bioshock was only run at 1080p.
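
      To make the distinction concrete, here is a rough way to read such numbers. A sketch with made-up FPS values (not Michael's data): if average FPS barely moves as the resolution rises, the CPU side (driver included) is the ceiling; if FPS falls roughly with pixel count, the card is the limit.

      # Illustrative heuristic only; the FPS numbers below are made up.
      def classify(fps_by_resolution, tolerance=0.15):
          """fps_by_resolution maps (width, height) -> average FPS."""
          runs = sorted(fps_by_resolution.items(),
                        key=lambda kv: kv[0][0] * kv[0][1])  # order by pixel count
          (_, low_res_fps), (_, high_res_fps) = runs[0], runs[-1]
          drop = 1.0 - high_res_fps / low_res_fps
          return "CPU/driver-bound" if drop < tolerance else "GPU-bound"

      # Flat FPS from 1080p to 4K suggests a driver-side ceiling...
      print(classify({(1920, 1080): 95.0, (2560, 1440): 94.0, (3840, 2160): 92.0}))
      # ...while FPS falling with pixel count means the card is the limit.
      print(classify({(1920, 1080): 140.0, (2560, 1440): 95.0, (3840, 2160): 48.0}))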

      If I ever have a lot of free time I would be really tempted to work on a hacked-up set of drivers that cut out most of the API validation code, so we could have a more apples-to-apples comparison.

      Originally posted by Xen0sys View Post
      I'm not over-dramatizing things. The results speak for themselves. An enthusiast-level card gets shown up by an entry-level card.
      With respect, I think you are over-dramatizing. If the apps were GPU-limited that would be a different story.
      Last edited by bridgman; 25 October 2016, 11:31 AM.



      • #83
        Edit: Reply stuck in mod queue



        • #84
          Originally posted by dungeon View Post

          I think the main reason for the difference there is that VP's eON pretty much depends on a non-default nvidia driver variable.

          So until there is breaking news on Phoronix that threaded GL works on Gallium, do not expect anything to change.
          Is that happening? Sounds pretty valuable.



          • #85
            Originally posted by Xen0sys View Post

            Is that happening? Sounds pretty valuable.
            Don't listen to that nonsense. The right place to put threading is in the game engine, not the GL driver. OpenGL was not made for multithreading; that kind of threading is a workaround best left to game devs.
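
            For what it's worth, the engine-side pattern that argument points at usually looks like the sketch below: game-logic threads only record work, and a single render thread that owns the GL context submits it. Illustrative Python only, with no real GL calls:

            import queue
            import threading

            # Engine-side threading sketch: the logic thread only *records*
            # draw commands; one render thread owns the (single-threaded) GL
            # context and is the only thread that ever submits them.
            draw_commands = queue.Queue()

            def game_logic():
                for frame in range(3):
                    # simulate, animate, cull... then record this frame's work
                    draw_commands.put(f"draw list for frame {frame}")
                draw_commands.put(None)  # sentinel: no more frames

            def render_loop():
                # A real engine would make the GL context current here once
                # and issue every GL call from this thread alone.
                while (cmd := draw_commands.get()) is not None:
                    print(f"[render] submitting {cmd}")

            t = threading.Thread(target=render_loop)
            t.start()
            game_logic()
            t.join()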



            • #86
              Originally posted by Xen0sys View Post
              Is that happening? Sounds pretty valuable.
              Yeah, pretty much all eON ports are CPU bound... I guess they first put the variable in the launch script, then start porting. Where "porting" is not actually porting, but more like configuring Cedega.
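
              Presumably the variable in question is the NVIDIA driver's __GL_THREADED_OPTIMIZATIONS switch; on the Mesa side, the threaded-GL work mentioned above is toggled with a mesa_glthread environment variable. A hypothetical launch-script equivalent in Python (the game binary name is made up):

              import os
              import subprocess

              # Hypothetical launcher, similar to what an eON start script
              # might set up. "./game.x86_64" is a made-up binary name.
              env = dict(os.environ)
              env["__GL_THREADED_OPTIMIZATIONS"] = "1"  # NVIDIA binary driver
              env["mesa_glthread"] = "true"             # Mesa threaded GL dispatch
              subprocess.run(["./game.x86_64"], env=env)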



              • #87
                Originally posted by bridgman View Post
                Huh ? Why would you say that ? Don't the drivers run on the CPU too ?
                If the game were universally CPU bound, it wouldn't matter how efficient the GPU driver was: you'd see all cards, regardless of power, maxing out at the same FPS ceiling. Instead, the results show the NVIDIA cards scaling linearly upwards, while the AMD cards are all FPS limited in the 92-97 range, so perhaps it's the AMD drivers that are CPU bound.
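
                That clustering can be checked mechanically: a narrow FPS spread across very different GPUs on one driver stack points at a shared CPU-side ceiling, while a wide spread means the cards themselves are the limit. A toy check with made-up numbers (not the article's data):

                from statistics import mean, pstdev

                # Made-up FPS numbers, illustrative only.
                results = {
                    "nvidia": {"GTX 950": 120.0, "GTX 1070": 210.0, "GTX 1080": 260.0},
                    "amd": {"R9 285": 92.0, "R9 Fury": 96.0, "RX 480": 94.0},
                }

                for driver, fps in results.items():
                    spread = pstdev(fps.values()) / mean(fps.values())
                    verdict = ("hitting a common ceiling" if spread < 0.05
                               else "scaling with the GPU")
                    print(f"{driver}: {verdict} (spread {spread:.1%})")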

                Originally posted by bridgman View Post
                Yeah, I think you are. If the app were GPU-limited that would be a different story.
                If B:I was CPU limited you wouldn't see the linear increase in FPS for each NVIDIA GPU. If anything it's the AMD drivers that are CPU limited in the context of the app.

                It is a big deal that someone who may or may not have bought a Fury (~$500) could have theoretically done just as well with a 950 (~$200).



                • #88
                  Originally posted by Xen0sys View Post
                  If B:I was CPU limited you wouldn't see the linear increase in FPS for each NVIDIA GPU. If anything it's the AMD drivers that are CPU limited in the context of the app.
                  Isn't that what I said ?

                  Originally posted by Xen0sys View Post
                  It is a big deal that someone who may or may not have bought a Fury (~$500) could have theoretically done just as well with a 950 (~$200).
                  If you are only playing one specific CPU-limited game at low resolution then yes you could have done just as well with a 950.

                  If you want to play any *other* games, or game at higher resolutions, now that's a different story.
                  Last edited by bridgman; 25 October 2016, 11:44 AM.



                  • #89
                    Originally posted by Xen0sys View Post

                    If the game were universally CPU bound, it wouldn't matter how efficient the GPU driver was: you'd see all cards, regardless of power, maxing out at the same FPS ceiling. Instead, the results show the NVIDIA cards scaling linearly upwards, while the AMD cards are all FPS limited in the 92-97 range, so perhaps it's the AMD drivers that are CPU bound.



                    If B:I was CPU limited you wouldn't see the linear increase in FPS for each NVIDIA GPU. If anything it's the AMD drivers that are CPU limited in the context of the app.

                    It is a big deal that someone who may or may not have bought a Fury (~$500) could have theoretically done just as well with a 950 (~$200).
                    Well, at least until you adjust game and driver settings to sit around the refresh rate and then compare visual fidelity, at which point AMD clearly wins.



                    • #90
                      Originally posted by bridgman View Post
                      Isn't that what I said ?
                      Not really, no. The game isn't CPU limited, because the NVIDIA drivers running it are not CPU limited. What is CPU limited is the AMD driver stack in the context of this game.

                      You did not say that.

                      Originally posted by bridgman View Post
                      If you are only playing one specific CPU-limited game at low resolution then yes you could have done just as well with a 950.

                      If you want to play any *other* games, or game at higher resolutions, now that's a different story.
                      AMD must be universally as good or better at everything to succeed.

                      You can see the 950 beating the Fury in CS:GO @ 1080p also. It's not just one game.

                      You can see the same in Metro - most of the big-title benchmarks in this round are evidence against your claim.

                      ---

                      If the Fury had the same relative cost and power efficiency as the 950, then this wouldn't matter one bit.
                      Last edited by Xen0sys; 25 October 2016, 11:51 AM.

