OpenGL Threaded Optimizations Responsible For NVIDIA's Faster Performance?


  • #11
    Originally posted by Stellarwind View Post
    I noticed threaded optimizations can actually reduce fps when I was trying to play the Wildstar closed beta, so it wasn't news to me. The game (under Wine) was running at around 20 fps without it and dropped to 7-8 fps with the setting enabled.
    Starcraft 2, on the other hand, will often get a 50% fps boost when enabling it, so don't take anything for granted, even with Wine.

    Comment


    • #12
      Oh, I keep __GL_ active all the time, as it wasn't causing any trouble, but apparently... http://www.phoronix.com/forums/showt...282#post466282

      Anyway, I retested, so...

      Metro 2033 Redux
      750 Ti: 53.15
      my 660 Ti w/ __GL_: 47.96
      my 660 Ti w/o __GL_: 74.9
      760: 73.11


      Metro Last Light Redux
      750 Ti: 46.47
      my 660 Ti w/ __GL_: 25.98
      my 660 Ti w/o __GL_: 59.17
      760: 68.99


      Ummm better
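
      For reference, the toggle being compared above is NVIDIA's __GL_THREADED_OPTIMIZATIONS environment variable. A minimal Python sketch of an A/B launcher for this kind of with/without run (the child command here just echoes the variable back; a real run would substitute the game or benchmark command, which is a placeholder assumption):

```python
import os
import subprocess
import sys

def run_with_gl_threading(cmd, enabled):
    """Launch cmd with __GL_THREADED_OPTIMIZATIONS forced on or off."""
    env = dict(os.environ)
    env["__GL_THREADED_OPTIMIZATIONS"] = "1" if enabled else "0"
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Probe child: echoes the variable instead of launching a real game.
probe = [sys.executable, "-c",
         "import os; print(os.environ['__GL_THREADED_OPTIMIZATIONS'])"]
print(run_with_gl_threading(probe, True).stdout.strip())   # 1
print(run_with_gl_threading(probe, False).stdout.strip())  # 0
```

      The same two-run pattern (identical command, only the variable flipped) is what produces the paired numbers in the table above.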

      Comment


      • #13
        Could this optimization be ported to the Mesa drivers?

        Also, great test Michael; it would be interesting to run it against Wine games as well, especially the ones mentioned by the Wine devs in their talk as reasons why Nine is not useful.

        Comment


        • #14
          Originally posted by geearf View Post
          Could this optimization be ported to the Mesa drivers?

          Also, great test Michael; it would be interesting to run it against Wine games as well, especially the ones mentioned by the Wine devs in their talk as reasons why Nine is not useful.
          Keep in mind that the CSMT patches are dubbed "__GL_THREAD... but with control"; mixing them yields nothing, as they cover the same ground, AFAIK.

          Comment


          • #15
            Originally posted by Licaon View Post
            Keep in mind that the CSMT patches are dubbed "__GL_THREAD... but with control"; mixing them yields nothing, as they cover the same ground, AFAIK.
            I agree with the CSMT part, but that's the theory; it would also be interesting to know whether it is actually as good (maybe better?).

            Also, should we consider the CSMT patches finished, in the sense that they have reached 90% of their performance benefit, or do we expect more when they are done?

            Comment


            • #16
              Originally posted by geearf View Post
              I agree with the CSMT part, but that's the theory; it would also be interesting to know whether it is actually as good (maybe better?).

              Also, should we consider the CSMT patches finished, in the sense that they have reached 90% of their performance benefit, or do we expect more when they are done?
              Since setting the variable reduces performance to sub-20 fps in all cases, of course CSMT is better.
              The CSMT patches are very much usable for almost any game that works under Wine; however, it is said that they are being refactored from DX9 to DX10/11.

              Well, there is no CSMT for eon, and it kinda shows how ~good~ they are for not testing this prior to including the variable in a public release.

              Comment


              • #17
                It is very interesting to see that GL TO can lead to slower performance even for very new titles. But it is simple to understand why the GTX 750 does not improve at all: one (Intel) core is already enough to push the GPU to its limit, and GPU-limited benchmarks never improve when you increase CPU speed (or enable MT). I am sure that with a slower (AMD) CPU you would see a difference with that card as well. As not every game is faster, you still have to check whether it helps; a benchmark mode for comparison is really good to have. I would like to see MLL benchmarks with the Mesa drivers compared to the binary ones.

                Comment


                • #18
                  Originally posted by phoronix View Post
                  Phoronix: OpenGL Threaded Optimizations Responsible For NVIDIA's Faster Performance?
                  http://www.phoronix.com/vr.php?view=21568
                  Does Unigine really use the environment variable? Is there no way it can be overridden from the program itself (always forcing it enabled or disabled)? The results are too close.
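
                  On the override question: a process can rewrite its own environment before the GL library is loaded, so an engine could in principle force the setting either way regardless of what the launching shell exported. A hedged sketch (the variable name is the one from the article; whether Unigine actually does anything like this is exactly the open question):

```python
import os

# A program can force the driver setting for its own process, overriding
# whatever the launching shell exported, as long as it does so before the
# GL driver is loaded and reads the environment.
os.environ["__GL_THREADED_OPTIMIZATIONS"] = "0"  # force-disable
print(os.environ.get("__GL_THREADED_OPTIMIZATIONS"))  # 0
```

                  If the engine does this unconditionally, toggling the variable from outside would have no effect, which could explain results that are too close to tell apart.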

                  Comment


                  • #19
                    Originally posted by magika View Post
                    Well, there is no CSMT for eon, and it kinda shows how ~good~ they are for not testing this prior to including the variable in a public release.
                    They are not including anything, although users requested it.

                    Comment


                    • #20
                      Originally posted by Kano View Post
                      It is very interesting to see that GL TO can lead to slower performance even for very new titles. But it is simple to understand why the GTX 750 does not improve at all: one (Intel) core is already enough to push the GPU to its limit, and GPU-limited benchmarks never improve when you increase CPU speed (or enable MT). I am sure that with a slower (AMD) CPU you would see a difference with that card as well. As not every game is faster, you still have to check whether it helps; a benchmark mode for comparison is really good to have. I would like to see MLL benchmarks with the Mesa drivers compared to the binary ones.
                      Dota 2 using 200% CPU with one core maxed out is not my CPU's fault; it's the translation layer's fault.
                      The latest AMD CPUs are decent.

                      It's funny when a game runs better under Wine than "native".

                      Comment
