The Linux 3.13 Kernel Is A Must-Have For AMD RadeonSI Users


  • The Linux 3.13 Kernel Is A Must-Have For AMD RadeonSI Users

    Phoronix: The Linux 3.13 Kernel Is A Must-Have For AMD RadeonSI Users

    The Linux 3.13 kernel, due for release in the very near future, is well worth the upgrade if you are a RadeonSI user -- in particular, on Radeon HD 7000 series GPUs and newer using the Gallium3D Linux graphics driver -- though users of other open-source graphics drivers may also see nice improvements in the new kernel release. Here are some benchmarks showing off the gains found with the Linux 3.13 kernel for Radeon HD and R9 graphics cards.

    http://www.phoronix.com/vr.php?view=19688

  • #2
    Impressive.

    BTW I've just disabled adblock for the phoronix.com domain; I don't know why I didn't do that before...



    • #3
      That's pretty huge. I would love to see the difference in more modern games, however -- perhaps Dota 2 or TF2 -- or comparisons of Wine performance, if anyone has them.



      • #4
        Thank you Michael for the benchmarks.

        On a small note: the fix for enabling all RadeonSI render backends also made it to kernel 3.12.7, which version were you using in your test? On my HD7850 I didn't notice any improvements, but maybe I wasn't affected by it in the first place.

        https://www.kernel.org/pub/linux/ker...angeLog-3.12.7

        Code:
        commit f3c1f0d0aaf20f9dee35ae99ec8b8705af4dc60e
        Author: Marek Olšák <marek.olsak@amd.com>
        Date:   Sun Dec 22 02:18:00 2013 +0100
        
            drm/radeon: fix render backend setup for SI and CIK
            
            commit 9fadb352ed73edd7801a280b552d33a6040c8721 upstream.
            
            Only the render backends of the first shader engine were enabled. The others
            were erroneously disabled. Enabling the other render backends improves
            performance a lot.
            
            Unigine Sanctuary on Bonaire:
              Before: 15 fps
              After:  90 fps
            
            Judging from the fan noise, the GPU was also underclocked when the other
            render backends were disabled, resulting in horrible performance. The fan is
            a lot noisier under load now.
            
            Signed-off-by: Marek Olšák <marek.olsak@amd.com>
            Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
            Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
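
        As a sketch for anyone wanting to check whether their own kernel tree already carries this backport (assuming a local git checkout of a stable kernel tree; the commit ID is the one from the ChangeLog above, nothing new):

        ```shell
        # Show the running kernel version (the backport landed in 3.12.7)
        uname -r

        # From inside a checkout of the stable kernel tree, test whether the
        # backported commit is an ancestor of the currently checked-out branch
        if git rev-parse --git-dir >/dev/null 2>&1; then
            if git merge-base --is-ancestor f3c1f0d0aaf20f9dee35ae99ec8b8705af4dc60e HEAD 2>/dev/null; then
                echo "render backend fix present"
            else
                echo "render backend fix missing (or not a kernel checkout)"
            fi
        fi
        ```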



        • #5
          Originally posted by r1348 View Post
          Thank you Michael for the benchmarks.

          On a small note: the fix for enabling all RadeonSI render backends also made it to kernel 3.12.7, which version were you using in your test?
          The versions are shown in the system table in the article...
          Michael Larabel
          http://www.michaellarabel.com/



          • #6
            Originally posted by dffx View Post
            That's pretty huge. I would love to see the difference in more modern games, however -- perhaps Dota2 or TF2
            See the note in the article why Source Engine games weren't used.
            Michael Larabel
            http://www.michaellarabel.com/



            • #7
              Also... http://kparal.wordpress.com/2014/01/...edora-rawhide/
              I was running the nodebug 3.13 rc8 kernel (beware of my mistakes).
              Modern games are covered there.



              • #8
                Maybe now an AMD Steam Box can use RadeonSI?



                • #9
                  Hi,

                  Anyone know how the FirePro W600 is performing with RadeonSI? I'm considering a W600 (instead of 2x NVS 510) for a six-monitor setup.

                  Thanks!



                  • #10
                    I didn't notice any performance increase with 3.13 on HD 7950.
                    ## VGA ##
                    AMD: X1950XTX, HD3870, HD5870
                    Intel: GMA45, HD3000 (Core i5 2500K)



                    • #11
                      Originally posted by Michael View Post
                      See the note in the article why Source Engine games weren't used.
                      LOL. When I saw that in the article, I was wondering if anyone would ask the question and what your response would be. I'm surprised and impressed.



                      • #12
                        Originally posted by darkbasic View Post
                        I didn't notice any performance increase with 3.13 on HD 7950.
                        If you forced DPM on and had 3.12.7 or later you had most of the bulk performance increases.

                        What I'm worried about is that now that the driver is using all its render cores at their rated speeds, the easy performance grabs are gone, and yet performance is still abysmal. The same cards under Catalyst would still be performing twice as fast in most cases. So what else is there to fix?
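
                        For reference, "forcing DPM on" with the 3.11/3.12 radeon driver meant a module parameter on the kernel command line (3.13 enables DPM by default for most of these chips); the sysfs paths below are the standard radeon DPM controls, though the card0 node is an assumption on multi-GPU setups:

                        ```shell
                        # Kernel command line entry (e.g. via GRUB) to force dynamic
                        # power management on:
                        #   radeon.dpm=1

                        # Verify DPM is active at runtime (the path exists only on a
                        # system with a radeon GPU):
                        if [ -f /sys/class/drm/card0/device/power_dpm_state ]; then
                            cat /sys/class/drm/card0/device/power_dpm_state
                            # Optionally pin the highest clocks while benchmarking (as root):
                            echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level
                        fi
                        ```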



                        • #13
                          Originally posted by zanny View Post
                          If you forced DPM on and had 3.12.7 or later you had most of the bulk performance increases.

                          What I'm worried about is that now that the driver is using all its render cores at their rated speeds, the easy performance grabs are gone, and yet performance is still abysmal. The same cards under Catalyst would still be performing twice as fast in most cases. So what else is there to fix?
                          Two guesses, and one factoid...

                          Guess One: Poorly optimized shader generation. LLVM was supposed to be better than r600, but maybe it's not perfect yet.

                          Guess Two: Poorly (or not at all) optimized code paths. Development is: make it work THEN make it fast.

                          Factoid: Radeon in general has fairly poor memory allocation compared to Catalyst, and the better your card is, the more obvious this becomes, because memory becomes the bottleneck. By poor memory allocation I mean the driver is kind of 'dumb' about where things should be in memory, what's safe to move out of GPU memory or just drop completely, and the like. It's on the 'volunteer todo list' last I heard, because optimized memory algorithms were never on AMD's original roadmap and plans.

                          One additional note... In most cases, especially with high-end cards, Catalyst will ALWAYS BE FASTER. The kernel and Mesa would never accept code that said

                          if (card_id == 7970) {
                                  /* code path 1 */
                          } else if (card_id == 7770) {
                                  /* code path 2 */
                          } else {
                                  /* code path 3: low- and mid-range cards */
                          }

                          Meanwhile Catalyst just might, and probably does, because they have a financial incentive to make sure that every single card gets the most performance it can get, even if that means micro-managing code paths. The kernel and Mesa devs will accept the code that works best on as many cards as possible and is the most maintainable, even if that means only hitting maybe 90% of the possible performance on a high-end card.



                          • #14
                            ohhhhhh.....

                            http://kparal.fedorapeople.org/blog/.../composite.xml

                            I like the number of games that are getting close to (and a few surpassing!) Catalyst.

                            It will be interesting to see Michael's comparison (please include the Unigine benchmark :-)



                            • #15
                              Originally posted by zanny View Post
                              If you forced DPM on and had 3.12.7 or later you had most of the bulk performance increases.

                              What I'm worried about is now that the driver is using all its render cores at their rated speeds, the easy performance grabs are gone, and yet performance is still abyssal. The same cards under Catalyst would still be performing twice as fast in most cases. So what else is there to fix?
                              Ok, first see what Ericg posted. His
                              Guess Two: Poorly (or not at all) optimized code paths. Development is: make it work THEN make it fast.
                              is the big issue. The 7000 series only just caught up on OpenGL support late last year, so they are just now starting work on optimization.


                              So what else is there to fix?
                              * There are several optimizations that they have implemented for r600 cards that haven't been ported yet.
                              * 2D (GLAMOR) has a lot of potential optimizations; it's almost completely unoptimized, and they have just started working on speeding it up. Interestingly, since GLAMOR uses 3D calls, some of the 3D optimizations should speed up 2D as well.
                              * Ram usage.
                              * Shader compilation.
