AMD Radeon Gallium3D Is Catching Up & Sometimes Beating Catalyst On Linux


    Phoronix: AMD Radeon Gallium3D Is Catching Up & Sometimes Beating Catalyst On Linux

    Last week I shared some preview benchmarks from Steam on Linux showing Radeon Gallium3D starting to beat Catalyst. In this article are the full results from comparing the open and closed-source AMD Linux graphics drivers across sixteen Radeon graphics cards while testing Team Fortress 2 and Counter-Strike: Global Offensive on Linux. The results yield a very close race!


  • #2
    Good to hear. Catalyst seems to be more or less dead and I don't think AMD will support older cards with its new driver model.



    • #3
      Originally posted by eydee View Post
      Good to hear. Catalyst seems to be more or less dead and I don't think AMD will support older cards with its new driver model.
      Actually I think it will until AMDGPU can take over. The new GPUs might be released before the new driver is upstreamed.



      • #4
        Originally posted by SolidSteel144 View Post
        Actually I think it will until AMDGPU can take over. The new GPUs might be released before the new driver is upstreamed.
        Yeah, sometimes they fix some rare bugs that affect 0.00001% of users and put a new version number on it. Performance improvements? Not really... It is dead.
        The fact that they release new GPUs doesn't mean we have to buy them. My card works fine, and I won't replace it until it breaks. (Then I'm not even sure I'll get another one from AMD, especially for Linux.)



        • #5
          Originally posted by eydee View Post
          Yeah, sometimes they fix some rare bugs that affect 0.00001% of users and put a new version number on it. Performance improvements? Not really... It is dead.
          The fact that they release new GPUs doesn't mean we have to buy them. My card works fine, and I won't replace it until it breaks. (Then I'm not even sure I'll get another one from AMD, especially for Linux.)
          Considering that AMD is the only high-performance GPU vendor that supports open source, while Nvidia does everything in their power to lock customers and developers into their own ecosystem (proprietary solutions often enforced by paid deals with publishers, G-Sync, GameWorks, PhysX, CUDA, no access to source code, etc.), my next card is going to be AMD as well.



          • #6
            AMD rules

            Too bad that you can count AMD laptops under 13" on the fingers of one hand...

            (Technically you can choose between three models: an overpriced & shitty HP EliteBook, a red & shitty HP Pavilion, and a low-end Lenovo...)



            • #7
              Testing Raw Hardware Fill Rate, NOT Driver Efficiency

              If the goal is to show differences in drivers, you need to run at resolutions that aren't fill-rate limited. When older cards are only getting 4 FPS and $500 top-of-the-line cards can barely pull 100 FPS, you're testing raw hardware fill rate, not comparing driver efficiency.
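
              A rough way to sanity-check this from the published numbers: if FPS drops roughly in proportion to the pixel count as the resolution rises, the run is GPU-bound (fill rate / shader throughput); if FPS barely moves between resolutions, you're mostly timing the driver and CPU. A minimal Python sketch, with made-up FPS figures standing in for real results:

              # Made-up FPS per resolution; substitute real benchmark numbers.
              results = {
                  (1280, 1024): 96.0,
                  (1920, 1080): 61.0,
                  (2560, 1600): 34.0,
              }

              (base_w, base_h), base_fps = min(results.items())  # lowest resolution as baseline
              base_pixels = base_w * base_h

              for (w, h), fps in sorted(results.items()):
                  pixel_ratio = (w * h) / base_pixels  # how much more work per frame
                  fps_ratio = base_fps / fps           # how much slower it actually ran
                  # Slowdown tracking pixel growth => GPU-bound; flat FPS => driver/CPU-bound.
                  verdict = "GPU-bound" if fps_ratio > 0.8 * pixel_ratio else "driver/CPU-bound"
                  print(f"{w}x{h}: {fps:5.1f} FPS  pixels x{pixel_ratio:.2f}  "
                        f"slowdown x{fps_ratio:.2f}  -> {verdict}")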



              • #8
                Have to gently disagree here. The limiting factor tends to be shader throughput not fillrate, and it's only quite recently that the open source drivers have been able to push geometry info into the hardware fast enough to approach HW limits. That needs a fairly high degree of parallel operations in the driver stack, including multiple driver threads and having DMA transfers run in parallel with drawing operations.

                Running tests at lower resolutions will provide other useful info (approaching what we call "null HW" testing internally, where you're only measuring driver overhead) but the main question users have had for the last few years is "when can I do high res gaming on the open source drivers ??".
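
                As a toy illustration of why that overlap matters (the per-frame costs below are invented, not measurements of any real driver):

                upload_ms = 4.0  # invented time to DMA vertex/texture data for one frame
                draw_ms = 6.0    # invented time the GPU spends drawing that frame

                serial = upload_ms + draw_ms          # driver blocks until the upload finishes
                overlapped = max(upload_ms, draw_ms)  # DMA transfer hidden behind drawing

                print(f"serial pipeline:     {1000 / serial:.0f} FPS")      # 100 FPS
                print(f"overlapped pipeline: {1000 / overlapped:.0f} FPS")  # 167 FPS
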
                Last edited by bridgman; 17 November 2014, 03:34 PM.



                • #9
                  Originally posted by bridgman View Post
                  Have to gently disagree here. The limiting factor tends to be shader throughput not fillrate, and it's only quite recently that the open source drivers have been able to push geometry info into the hardware fast enough to approach HW limits. That needs a fairly high degree of parallel operations in the driver stack, including multiple driver threads and having DMA transfers run in parallel with drawing operations.

                  Running tests at lower resolutions will provide other useful info (approaching what we call "null HW" testing internally, where you're only measuring driver overhead) but the main question users have had for the last few years is "when can I do high res gaming on the open source drivers ??".
                  Thanks for everything. One question: will the open-source driver do any game-specific optimizations? I'm not a driver guru, but I thought I heard that happens in the Windows world.



                  • #10
                    @Michael

                    Is it on the to-do list to group the boxplots as well? They would be much more comparable if they were also grouped by card and shown with different colors, like the scores are (a sketch of what I mean is below).

                    Also, GPU usage data would have been good.
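
                    For the grouped boxplots, something along these lines would do it, as a hypothetical matplotlib sketch (the card names and FPS samples are invented, just to show the grouping and coloring):

                    import matplotlib.pyplot as plt
                    from matplotlib.patches import Patch

                    cards = ["HD 7950", "R9 270X", "R9 290"]  # hypothetical card lineup
                    gallium = [[34, 36, 33, 37], [41, 43, 40, 44], [55, 58, 54, 57]]   # invented FPS samples
                    catalyst = [[36, 38, 35, 39], [44, 46, 42, 45], [60, 63, 59, 62]]  # invented FPS samples

                    fig, ax = plt.subplots()
                    for i, (g, c) in enumerate(zip(gallium, catalyst)):
                        # One pair of boxes per card: open driver just left of the slot, Catalyst just right.
                        bg = ax.boxplot([g], positions=[i - 0.18], widths=0.3, patch_artist=True)
                        bc = ax.boxplot([c], positions=[i + 0.18], widths=0.3, patch_artist=True)
                        bg["boxes"][0].set_facecolor("tab:green")
                        bc["boxes"][0].set_facecolor("tab:orange")

                    ax.set_xticks(range(len(cards)))
                    ax.set_xticklabels(cards)
                    ax.set_ylabel("FPS")
                    ax.legend(handles=[Patch(facecolor="tab:green", label="Gallium3D"),
                                       Patch(facecolor="tab:orange", label="Catalyst")])
                    plt.show()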

