Marek Squeezes More Performance Out Of RadeonSI In CPU-Bound Scenarios


    Phoronix: Marek Squeezes More Performance Out Of RadeonSI In CPU-Bound Scenarios

    AMD's leading open-source RadeonSI Gallium3D developer, Marek Olšák, sent out a new patch series this week aiming to benefit this Radeon OpenGL driver's performance in CPU-bound scenarios...

    http://www.phoronix.com/scan.php?pag...onSI-CPU-Bound

  • #2
So do I get this right that it simply reduces CPU overhead a bit? Does that mean lower CPU usage everywhere? That would certainly be desirable.



    • #3
I wonder if I can lend Marek my SGI Octane, so I don't need to write the Impact and Odyssey X/3D drivers myself ;-) https://www.youtube.com/watch?v=0dchE5DHSIc



      • #4
Wonder if this might help Dota 2 and similar cases where RadeonSI seems to underperform a bit.



        • #5
          Do they have to start from zero again with the new GPU generation?



          • #6
Originally posted by Brisse:
Wonder if this might help Dota2 and such cases where RadeonSI seems to underperform a bit.
I think the main issue in Dota 2 is that it's not well optimized for Radeon GPUs when pure GPU performance is the limiting factor.



            • #7
Hope they'll find time to fix bugs before releasing 18.2, be it the long-standing system freezes or the recent regressions in some games.



              • #8
Instead of tuning for workloads like glxgears, I would hope the RadeonSI devs could focus on glamor overhead/performance.
By layering 2D on top of OpenGL, 2D performance took quite a hit (just compare Intel SNA to RadeonSI glamor), and I even saw some major regressions during RadeonSI development. As far as I can see, those cases don't get a lot of attention, despite the fact that *every* Linux user uses 2D -> glamor, while only a few do actual gaming on their Linux machines.



                • #9
AFAIK most of the tuning focus so far has been on real-world workloads - but some of the optimization work involves looking at each function within the driver and thinking about how to make it faster. Each of those optimizations might show the biggest improvement in a different class of scenarios/workloads.

                  In this case I believe Marek used glxgears as a reference specifically because he was looking at command submission, and glxgears is a good example of an app that has relatively little real work per command submission (hence the very high frame rates).

Are there real-world scenarios (the old X server benchmarks don't correlate well with real-world performance) where you see glamor performance lacking?
                  Last edited by bridgman; 07-16-2018, 02:25 AM.
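The point above about glxgears can be sketched with a toy model: if CPU submission cost scales with the number of draw calls while the GPU work per frame is fixed, then a workload with almost no GPU work per submission (glxgears-like) is bounded entirely by CPU overhead, so halving that overhead roughly doubles the frame rate, while a GPU-heavy scene barely moves. All numbers and function names below are hypothetical illustrations, not actual RadeonSI internals.

```python
# Toy model of why glxgears-style workloads expose CPU submission overhead.
# All timings are made-up illustrative numbers.

def frame_time_ms(draw_calls, cpu_overhead_per_call_ms, gpu_work_ms):
    """CPU submission cost scales with draw calls; GPU work is fixed per frame.
    Assume no CPU/GPU overlap for simplicity: frame time is whichever is larger."""
    cpu = draw_calls * cpu_overhead_per_call_ms
    return max(cpu, gpu_work_ms)

def fps(draw_calls, overhead_ms, gpu_ms):
    return 1000.0 / frame_time_ms(draw_calls, overhead_ms, gpu_ms)

# glxgears-like: trivial GPU work, so CPU submission overhead dominates.
gears_before = fps(100, 0.002, 0.05)   # 0.2 ms CPU vs 0.05 ms GPU -> CPU-bound
gears_after  = fps(100, 0.001, 0.05)   # per-call overhead halved

# Game-like: heavy GPU work, so the same CPU saving barely matters.
game_before = fps(2000, 0.002, 16.0)   # 4 ms CPU vs 16 ms GPU -> GPU-bound
game_after  = fps(2000, 0.001, 16.0)

print(f"glxgears-like: {gears_before:.0f} -> {gears_after:.0f} FPS")
print(f"game-like:     {game_before:.1f} -> {game_after:.1f} FPS")
```

In this sketch the glxgears-like case doubles its frame rate while the game-like case is unchanged, which is why a command-submission optimization shows up most dramatically in exactly the kind of high-FPS, low-work test that looks unrepresentative at first glance.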



                  • #10
Originally posted by Kemosabe:
Do they have to start from zero again with the new GPU generation?
Not really.

