NVIDIA GeForce RTX 3080 Linux Gaming Performance

  • #1

    Phoronix: NVIDIA GeForce RTX 3080 Linux Gaming Performance

    After last week's look at the NVIDIA GeForce RTX 3080 Linux GPU compute performance for this Ampere graphics card, along with its Blender 2.90 performance, today's article covers the Linux gaming performance of the RTX 3080, both for native games and for Windows games running on Linux via Steam Play (Proton).

  • #2
    The performance difference versus the 2080 Ti doesn't seem quite as impressive as other benchmarks have suggested. Maybe this is due to the higher CPU cost of running a game via Proton (or various in-house Direct3D -> OpenGL wrappers)?

  • #3
    Nobody serious about Linux and Open Source touches this binary-only blob, like it's Covid. And AMD's equal-performance Big Navi RDNA2 is just around the corner, soooo... yolo.

  • #4
    They say the AMD 6000 series will be as much as double the 5700XT's performance. It will be interesting to see if that actually transpires under Linux, as it might see the top card beating the 3080, which would be quite a twist.

  • #5
    Look at those minimums... nice.

  • #6
    Avg 18% faster than the last generation? Idk man, for all the hype these 3000 series cards are getting, they sure aren't that impressive. Even the price drop is not impressive, since the pricing of the 2000 series was completely outrageous in the first place...

  • #7
    Michael
    Any chance to get a geometric mean per watt?
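
    For anyone wondering what such a number would look like: it would just be the geometric mean of FPS divided by average power draw across the test set. A minimal sketch of the calculation is below; the game names and figures are placeholders rather than results from this article, and this is not Phoronix Test Suite code.

        # Sketch: geometric mean of performance-per-watt across a set of games.
        # The FPS and power-draw values are hypothetical placeholders.
        from math import prod

        results = {
            # game: (average FPS, average board power in watts)
            "Game A": (120.0, 310.0),
            "Game B": (95.0, 300.0),
            "Game C": (144.0, 320.0),
        }

        perf_per_watt = [fps / watts for fps, watts in results.values()]
        geomean = prod(perf_per_watt) ** (1.0 / len(perf_per_watt))
        print(f"Geometric mean: {geomean:.3f} FPS per watt")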

  • #8
    Originally posted by rabcor View Post
    Avg 18% faster than the last generation? Idk man, for all the hype these 3000 series cards are getting, they sure aren't that impressive. Even the price drop is not impressive, since the pricing of the 2000 series was completely outrageous in the first place...
    I was just about to say that. Double the performance over a 2080 was promised. The blatant lie about having twice the CUDA cores, with a bogus figure of 30 TFlops, was touted. All in all, it is a good yet power-hungry GPU with long-overdue price cuts and a bunch of toyish-seeming software gimmicks. Still, it is nowhere near as great a leap or as good a value as many people seem to believe:

    (Embedded AdoredTV video: https://adoredtv.com )
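
    For context on where the 30 TFlops headline figure comes from: it is the usual peak-throughput calculation (shader count x 2 FLOPs per FMA per clock x boost clock), and it counts both FP32 datapaths per Ampere SM partition even though one of them is shared with INT32 work, which is part of why games rarely get anywhere near that number. A quick sketch of the arithmetic, using the published RTX 3080 spec figures:

        # Sketch: peak FP32 throughput for the RTX 3080 (10 GB) from its spec sheet.
        # Peak FP32 = shader count * 2 FLOPs per FMA per clock * boost clock.
        cuda_cores = 8704          # advertised shader count
        boost_clock_hz = 1.71e9    # official boost clock

        peak_fp32_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
        print(f"Peak FP32: {peak_fp32_tflops:.1f} TFLOPS")  # ~29.8 TFLOPS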

  • #9
    Originally posted by rene View Post
    Nobody serious about Linux and Open Source touches this binary-only blob, like it's Covid. And AMD's equal-performance Big Navi RDNA2 is just around the corner, soooo... yolo.
    Omit the binary blob, i.e. the firmware, from your AMD card with its open-source driver and see how many FPS you get then.

  • #10
    Originally posted by Calinou View Post
    The performance difference versus the 2080 Ti doesn't seem quite as impressive as other benchmarks have suggested. Maybe this is due to the higher CPU cost of running a game via Proton (or various in-house Direct3D -> OpenGL wrappers)?
    In general (i.e. even on Windows) the performance difference can vary quite a bit; at 1080p, in a lot of cases you don't really get that much more performance.

    It would be interesting to see whether there are performance differences between Windows and Linux, but historically speaking NVIDIA sets a high standard for their blob, so minus some occasional regressions there shouldn't be a huge performance difference between the systems.
