Here's Why Radeon Graphics Are Faster On Linux 3.12


  • #21
    Originally posted by marek View Post
    I second this. People who are serious about performance should always set the "performance" governor. I even removed the file /etc/init.d/ondemand, so that I always get "performance" on boot. The init.d script sets "ondemand" after 1 minute or so after boot, i.e. when you least expect it, which messes up benchmark results. If you play games with the "ondemand" governor and complain about performance to us, you're wasting our time.

    While it's a good thing that we know what the problem was, it's also a big fail for Phoronix. "ondemand" should never ever be used for benchmarking.
    The testing is about the defaults of the distributions that most people actually use.
    Michael Larabel
    https://www.michaellarabel.com/



    • #22
      So why does ondemand still exist, anyway? So far it seems that it's always beneficial to sleep as long as possible, which means you're always either sleeping or running at maximum frequency so you can get back to sleep as soon as possible. The only exceptions I can see are CPUs that don't support sleeping at all, and maybe those with specific temperature requirements.



      • #23
        Definitely a welcome change!

        It was clearly a bug: the "ondemand" policy could not distinguish between the right kinds of load, i.e. between a latency-critical single-core load over a short time (like a game engine pumping textures) and an irrelevant multi-core load (like the Flash player bug).

        It was discovered several months ago and demonstrated many times: any light CPU load (a light or well-optimized game engine) would keep the governor convinced the load was too irrelevant to run fast.

        The switch to "performance" was unnecessary, as one could tune "ondemand" into acceptable behaviour, permanently. See: http://phoronix.com/forums/showthrea...esa-perfomance

        One could tune "ondemand" simply by raising sampling_down_factor, so the governor waits longer before switching back down (thus flattening the curve), and lowering up_threshold, the minimum load at which it considers the load relevant enough to ramp up:
        # echo 150 > /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor
        # echo 35 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold

        .. or set this permanently in /etc/sysfs.conf (from the sysfsutils package), which takes sysfs-relative paths:
        devices/system/cpu/cpufreq/ondemand/up_threshold = 50
        devices/system/cpu/cpufreq/ondemand/sampling_down_factor = 10

        devices/system/cpu/cpu0/cpufreq/scaling_governor = ondemand
        devices/system/cpu/cpu1/cpufreq/scaling_governor = ondemand
        devices/system/cpu/cpu2/cpufreq/scaling_governor = ondemand
        devices/system/cpu/cpu3/cpufreq/scaling_governor = ondemand
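
        A quick read-back confirms whether the tunables actually took effect. A minimal POSIX-sh sketch (the function name is made up for this sketch; the sysfs root is parameterized only so the check is easy to exercise, so on a real system just call it with no argument):

```shell
# show_ondemand_tunables: read back the ondemand governor tunables under
# a given sysfs root (default /sys). Read-only sanity check; the function
# name is hypothetical, not a standard tool.
show_ondemand_tunables() {
  base="${1:-/sys}/devices/system/cpu/cpufreq/ondemand"
  for f in up_threshold sampling_down_factor; do
    if [ -r "$base/$f" ]; then
      printf '%s=%s\n' "$f" "$(cat "$base/$f")"
    else
      printf '%s=unavailable\n' "$f"
    fi
  done
}
```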


        Ten pages of trolling from user sdack, with shader-optimization developer Vadim Grilin himself confirming it; and now it's confirmed and fixed. Probably... it's not even sufficiently tested yet.

        ---

        To test whether this patch fixes the problem, check these cases:
        Note that cases 1 vs. 2 are already proven to benefit 3D.


        Case 1:
        Pre-patch kernel + Mesa. CPU governor: ondemand, untuned.

        Case 2:
        Pre-patch kernel + Mesa. CPU governor: ondemand, tuned.

        Case 3:
        Patched kernel. CPU governor: ondemand, untuned.

        Case 4:
        Patched or unpatched kernel. CPU governor: performance.

        This will show whether 3D really benefits from this patch, whether tuning ondemand on an unpatched kernel helps versus the performance governor (it does), and whether there is still any point in tuning ondemand after the latest patch.
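
        Switching governors between the cases can be scripted. A minimal POSIX-sh sketch (the helper name is made up; the sysfs root is parameterized only to make it testable, and writing the real files requires root):

```shell
# set_governor: write a governor name into every per-CPU scaling_governor
# file under a given sysfs root (default /sys). Hypothetical helper for
# flipping between the benchmark cases; needs root on a real system.
set_governor() {
  gov="$1"
  root="${2:-/sys}"
  for f in "$root"/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    if [ -w "$f" ]; then
      echo "$gov" > "$f"
    fi
  done
}

# e.g. set_governor performance   (Case 4)
#      set_governor ondemand      (Cases 1-3)
```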



        • #24
          Originally posted by marek View Post
          I second this. People who are serious about performance should always set the "performance" governor. I even removed the file /etc/init.d/ondemand, so that I always get "performance" on boot. The init.d script sets "ondemand" after 1 minute or so after boot, i.e. when you least expect it, which messes up benchmark results. If you play games with the "ondemand" governor and complain about performance to us, you're wasting our time.

          While it's a good thing that we know what the problem was, it's also a big fail for Phoronix. "ondemand" should never ever be used for benchmarking.
          No way. Games, drivers, or the operating system in general should handle this kind of tweaking without bothering users. CPU and GPU frequency scaling should "just work".

          I'm not going to tell my little sister that she has to change the cpufreq governor to play games.



          • #25
            Originally posted by Pontostroy View Post
            Also relevant: Linux's "Ondemand" Governor Is No Longer Fit

            It seems distributions should be switching to the "Intel P-State" driver, at least on Intel hardware.
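
            Whether intel_pstate (or acpi-cpufreq with ondemand) is actually in use can be checked from sysfs. A minimal POSIX-sh sketch (the function name is hypothetical; the root argument exists only to make the check testable):

```shell
# active_cpufreq_driver: print the scaling driver the kernel is using for
# cpu0 (e.g. "intel_pstate" or "acpi-cpufreq"), or "none" if cpufreq is
# unavailable. Hypothetical helper name for this sketch.
active_cpufreq_driver() {
  f="${1:-/sys}/devices/system/cpu/cpu0/cpufreq/scaling_driver"
  if [ -r "$f" ]; then
    cat "$f"
  else
    echo none
  fi
}
```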



            • #26
              Actually, looking at it, the fact that the tests indicate any change at all is thanks to a bug in Ubuntu. It prevents the Intel processor that Michael used for testing from using the intel_pstate driver, which has, and needs, no ondemand governor. The driver was supposed to be disabled temporarily until a bug was fixed upstream; the bug did get fixed, yet the Ubuntu maintainers never re-enabled it. See this bug report for more information: https://bugs.launchpad.net/ubuntu/+s...x/+bug/1188647



              • #27
                What a FAIL. Michael forgot to turn off ondemand, but he knows how to git bisect and write some things.



                • #28
                  Originally posted by Petteri View Post
                  No way. Games, drivers, or the operating system in general should handle this kind of tweaking without bothering users. CPU and GPU frequency scaling should "just work".

                  I'm not going to tell my little sister that she has to change the cpufreq governor to play games.
                  Of course for normal users it should "just work", but for benchmarking the question arises: what are you trying to measure? Do you want to measure the limits of the hardware, the limits of the software, or both? If you want to measure the performance of a default distribution, then using the default CPU governor makes sense. If you want to measure what the hardware is capable of, then using the performance governor makes sense.

                  In this case, I'd say using the default governor makes sense, but it also makes sense for the distributions to consider changing the default governor to something more modern.



                  • #29
                    Originally posted by 89c51 View Post
                    Average joe configuration should be used IMO. Also the user MUST NOT have to care about stuff like that. It should just work.
                    You can use that, but don't bother me with the benchmark results, because as a driver developer I am not interested. It's not only the graphics driver being benchmarked here, it's also cpufreq. And who knows what cpufreq will do next time.

                    The CPU driver overhead varies depending on the GPU. For example, if a fast GPU has lower driver overhead than a slow GPU, cpufreq will underclock the CPU more for the fast GPU, thus making the fast GPU look slower in the end. This can happen with the Cayman GPU with virtual memory enabled, because the kernel CS checker is skipped, saving a lot of CPU time. This will also happen with Southern Islands and later GPUs, which use a different driver with different CPU overhead, therefore cpufreq might behave differently.



                    • #30
                      My guess is confirmed. That's fun.

