Here's Why Radeon Graphics Are Faster On Linux 3.12


  • #51
    Originally posted by schmidtbag View Post
    I seriously cannot believe the amount of finger-pointing going on here, and sadly it's making me lose a lot of respect for people I otherwise thought were humble, hard workers.


    It is NOT by any means Michael's fault that the results came out this way. He is not obligated to run under a different governor, just as he isn't obligated to use some obscure distro, make some minor kernel tweak, or tweak the drivers for individual tests - the point of these benchmarks is to show what the average person will/should encounter from everyday upgrades. If the governor is known to be problematic, then fine, switch out of it, but if the CPU is actually underclocking due to a lack of stress, that really makes me think the governor is not the problem.

    What Michael is obligated to do is benchmark using the most typical/average software setup, and a hardware setup that has the lowest probability of skewing the results (meaning his choice of CPU, mobo, RAM, and storage was fine for testing GPUs, because all of those parts are good enough that they SHOULDN'T be a bottleneck). If you want benchmarks for the highest possible results, you're in the wrong place and always have been. Even if this website were strictly benchmarks and nothing else, no single person would ever have the time to set up some of the silly or unrealistic requests here.

    I'm not (yet) blaming the driver developers either, since they were being affected by an outside source.

    HOWEVER

    The CPU Michael used was better than almost anything AMD has to offer. That being said, it is absolutely unacceptable for the drivers to be THAT held back by the CPU, even in its low-freq state. This could mean that APUs are behind in performance simply because the CPU isn't fast enough. I'm not bashing AMD CPUs either - AMD CPUs are fully capable of playing most modern games without being maxed out.

    But like I said, I'm not yet blaming the driver developers. I think tests should be done with Catalyst on the HD 6870 (because that had the greatest performance impact) to see how much of a difference that makes. If, while testing Catalyst, 3.11 ondemand vs. 3.12 ondemand has a performance impact of less than 5%, then I think the open-source Radeon drivers are to blame.
    I fully agree with you: we do not need to look for people to blame, we need solutions.
    I am a user, so I can only report problems.

    I am also a gamer, and I am very happy to see that games are starting to run as well on Linux as they do on Windows.
    We need to look forward, not back, and not look for people to blame.



  • #52
    AMD Bulldozer CPUs still need the performance governor for maximum framerate

    Originally posted by schmidtbag View Post
    The CPU Michael used was better than almost anything AMD has to offer. That being said, it is absolutely unacceptable for the drivers to be THAT held back by the CPU, even in its low-freq state. This could mean that APUs are behind in performance simply because the CPU isn't fast enough. I'm not bashing AMD CPUs either - AMD CPUs are fully capable of playing most modern games without being maxed out.
    I checked the governor's impact on my machine (FX-8120 and HD 6750) last night with Critter, which seems to stress my cards more than anything else I have. There's no difference between Linux 3.11 and 3.12 that I can tell by observing framerates manually, but setting the CPU governor to performance, or just pinning the maximum frequency in indicator-cpufreq, still more than doubles the framerate. Critter is only a 2D game written in OpenGL, but because it is not CPU-choked like 0 A.D. (and apparently Scorched3D as well), it seems to isolate the graphics card by running at very high framerates. In CPU-limited games the ondemand governor works fine, because the game holds the CPU wide open - at least on the Phenom II and Bulldozer CPUs, the only ones I have tested.
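
    For anyone wanting to reproduce this without indicator-cpufreq, a minimal sketch of switching every core to the performance governor; it assumes the standard Linux cpufreq sysfs layout and needs root:

    # set_governor.py - write "performance" into each core's scaling_governor.
    # Assumes the standard cpufreq sysfs layout; run as root.
    import glob

    def set_governor(governor="performance"):
        for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
            with open(path, "w") as f:
                f.write(governor)

    if __name__ == "__main__":
        set_governor()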



  • #53
    Originally posted by schmidtbag View Post
    The CPU Michael used was better than almost anything AMD has to offer. That being said, it is absolutely unacceptable for the drivers to be THAT held back by the CPU, even in its low-freq state. This could mean that APUs are behind in performance simply because the CPU isn't fast enough. I'm not bashing AMD CPUs either - AMD CPUs are fully capable of playing most modern games without being maxed out.
    I doubt that the i7 4770K is faster than most AMD CPUs when it is running at only 900 MHz. I'd say the drivers are very efficient if games can run at hundreds of frames per second with the CPU in its lowest power state.

    And everyone, please read the article before making a lot of noise here.
    As far as I can see, the article correctly states what is causing the great graphics performance (with Radeon) in the 3.12 kernel.



  • #54
    Thanks Michael.

    I would also request a test with an AMD CPU + discrete GPU. Any GPU will do - just a single test to see whether only Intel CPUs were affected.



  • #55
    Originally posted by monraaf View Post
    So, is this really an improvement?

    It just means the open-source AMD Radeon driver depends too much on the CPU instead of, duh, the processing power of the graphics card.
    The other drivers are not affected because they actually use the graphics card instead of the CPU.

    Also, I don't think always keeping the CPU at its limit is good either; I imagine much more power is wasted now, since the goal of power saving is to sleep as much as possible and, when in use, to run at as low a frequency as possible.

    Am I wrong? Maybe, but I doubt this really is the "next big thing".
    Unfortunately, that's how the Evergreens work.
    Most of AMD's GPUs really lack hardware protections, so the driver needs to handle all the security issues.

    SI GPUs have improved a lot, but not enough. I'm hoping GCN2 will give me a surprise.



  • #56
    Nouveau is also affected - https://twitter.com/michaellarabel/s...54788672778240
    Michael Larabel
    https://www.michaellarabel.com/



  • #57
    Originally posted by pingufunkybeat View Post
    I understand the rationale for using default settings only.

    It would make sense to have two tests -- one with all defaults, and one configured for maximum performance.
    That would be ideal, but Michael is too busy for that. It would be more interesting and realistic if it were always maximum performance, except for the multi-distribution benchmarks; those are meant to show the differences between defaults, and having them as a guideline average-Joe index would be fine in my opinion.

    That said, optimising things is not as easy as using defaults. But maybe it would be possible to create an auto-optimiser script to do the work - setting the performance governor, disabling the swap-buffers wait/vsync, turning off compositing, etc.? A sketch of what such a script might look like is below.
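
    As a rough illustration only: the sysfs path is the standard cpufreq one, vblank_mode=0 is Mesa's environment switch for skipping the vsync wait on buffer swaps, and disabling compositing is desktop-specific, so it is left as a comment.

    # bench_mode.py - sketch of an "auto-optimiser" for benchmark runs.
    import glob
    import os
    import subprocess

    def performance_governor():
        # Standard cpufreq sysfs layout; needs root.
        for p in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
            with open(p, "w") as f:
                f.write("performance")

    def run_benchmark(cmd):
        # vblank_mode=0 tells Mesa drivers not to wait for vsync on SwapBuffers.
        env = dict(os.environ, vblank_mode="0")
        # Turning compositing off would go here; the command depends on the desktop.
        subprocess.run(cmd, env=env, check=True)

    if __name__ == "__main__":
        performance_governor()
        run_benchmark(["glxgears"])  # stand-in for the actual benchmark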



  • #58
    Originally posted by s_j_newbury View Post
    I'm not sure I really buy the defaults argument. The default is to have the FPS limited to the display refresh rate, and rightly so. If there's a call to measure performance with games so far below the capabilities of the graphics hardware that they are CPU-bound, wouldn't overall power utilisation be a more meaningful benchmark? What use is there in running at hundreds of FPS more than you can see?
    It's only for testing purposes and to see how the drivers are evolving...

    Personally, as a gamer, I couldn't care less whether a game can pass 60 FPS.
    High frame rates can actually be bad...

    I'll give an example: in "Enemy Territory: QUAKE Wars" my hardware could easily reach 120 FPS, but I was having extreme difficulty hitting anyone before they hit me and killed me. Then I read somewhere that gamers should try to keep the FPS as stable as possible...

    I noticed that my frame rate was actually jumping from 120 FPS down to as low as 80 FPS, so by changing some settings in the .cfg file I locked/limited the frame rate to 60 FPS (without using vsync)...

    My frame rate then ranged from 59 FPS min to 61 FPS max... and suddenly I started to hit and kill enemies with ease.
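
    For what it's worth, a vsync-independent frame cap like that usually boils down to sleeping away whatever is left of each frame's time budget. A minimal sketch, with render_frame standing in for the game's per-frame work:

    # Cap a render loop at 60 FPS without vsync by sleeping off leftover time.
    import time

    TARGET_FPS = 60
    FRAME_BUDGET = 1.0 / TARGET_FPS

    def frame_limited_loop(render_frame):
        while True:
            start = time.perf_counter()
            render_frame()  # the game's per-frame update and draw
            left = FRAME_BUDGET - (time.perf_counter() - start)
            if left > 0:
                time.sleep(left)  # idle instead of rendering frames no one sees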



  • #59
    Originally posted by 89c51 View Post
    An average-Joe configuration should be used, IMO. Also, the user MUST NOT have to care about stuff like that. It should just work.
    Amen.

    Also, Michael does do tweaked benchmarks with all the optional things turned on. It's just that this time he didn't, and he found out that they had fixed ondemand.



  • #60
    Originally posted by AJSB View Post
    It's only for testing purposes and to see how the drivers are evolving...

    Personally, as a gamer, I couldn't care less whether a game can pass 60 FPS.
    High frame rates can actually be bad...

    I'll give an example: in "Enemy Territory: QUAKE Wars" my hardware could easily reach 120 FPS, but I was having extreme difficulty hitting anyone before they hit me and killed me. Then I read somewhere that gamers should try to keep the FPS as stable as possible...

    I noticed that my frame rate was actually jumping from 120 FPS down to as low as 80 FPS, so by changing some settings in the .cfg file I locked/limited the frame rate to 60 FPS (without using vsync)...

    My frame rate then ranged from 59 FPS min to 61 FPS max... and suddenly I started to hit and kill enemies with ease.

    In a computer game you should always have vsync on, for three reasons:
    1. Rendering more frames than your monitor can display is pointless, as you don't get to see the extra frames.
    2. Rendering more frames than your monitor can display can cause tearing.
    3. Having vsync on lets your GPU run cooler.
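
    For completeness, requesting vsync from code is typically a single swap-interval call on the GL context. A minimal sketch using PySDL2 (assuming PySDL2 and a working OpenGL driver are installed; the driver may still override the request):

    # Create an OpenGL window and ask SDL2 to sync buffer swaps to the display.
    import sdl2

    sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO)
    window = sdl2.SDL_CreateWindow(b"vsync demo",
                                   sdl2.SDL_WINDOWPOS_CENTERED,
                                   sdl2.SDL_WINDOWPOS_CENTERED,
                                   640, 480, sdl2.SDL_WINDOW_OPENGL)
    context = sdl2.SDL_GL_CreateContext(window)
    sdl2.SDL_GL_SetSwapInterval(1)  # 1 = vsync on; 0 would disable it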

