Radeon DRM GPU Driver Performance: Linux 3.4 To 3.8

  • #21
    Originally posted by marek View Post
    Unity is pretty exotic to me. It's not a WM tuned for high-performance graphics and gaming. It's a good WM for everything else, though.
    Since you're here, is there any way to "un-advertise" OpenGL 2.1 / 3.0 (to bump the performance of Nexuiz) at runtime?
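
    For anyone else wondering, Mesa can be told to advertise a lower GL version through environment variables. A minimal sketch, assuming a Mesa-based stack and that MESA_GL_VERSION_OVERRIDE / MESA_GLSL_VERSION_OVERRIDE behave as documented (the launcher name is just a placeholder):

        import os
        import subprocess

        # Ask Mesa to advertise plain OpenGL 2.1 instead of its maximum
        # supported version (assumption: the driver honors the overrides).
        env = dict(os.environ)
        env["MESA_GL_VERSION_OVERRIDE"] = "2.1"
        env["MESA_GLSL_VERSION_OVERRIDE"] = "120"  # GLSL 1.20 pairs with GL 2.1

        # "nexuiz-glx" is a placeholder for however the game is launched.
        subprocess.run(["nexuiz-glx"], env=env)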

    Originally posted by Vim_User View Post
    The title of this article is "Radeon DRM GPU Driver Performance: Linux 3.4 to 3.8". It is meant to give an overview of performance improvements/regressions between the different driver versions. How can this be of any use when a WM is used that possibly distorts the results, but nobody knows when and how this happens? When somebody wants to benchmark a specific topic, here driver versions, he has to make sure that everything else that influences the benchmark stays exactly the same, and you can't do that when using Unity.
    Hence my conclusion that this benchmark is useless.
    Stop whinging and do your own bloody benchmark. The tools are out there. You could've run it in the time you spent writing all your pointless posts.

    • #22
      Again, the point of the article is to compare the performance of different driver versions across the different kernels. Michael has chosen a setup that makes it impossible to do that reliably.
      This makes the article pointless. Of course, PeterKraus, if you feel that making people aware of the faults in a measurement method is pointless, then just don't read it and believe the unreliable data presented by Phoronix, instead of actually trying to make future benchmark results more reliable.

      • #23
        Originally posted by Vim_User View Post
        Again, the point of the article is to compare the performance of different driver versions across the different kernels. Michael has chosen a setup that makes it impossible to do that reliably.
        This makes the article pointless. Of course, PeterKraus, if you feel that making people aware of the faults in a measurement method is pointless, then just don't read it and believe the unreliable data presented by Phoronix, instead of actually trying to make future benchmark results more reliable.
        Don't you go all bloody science-y on me.

        It's not a fault in "a measurement method", as the measurement of time (or frames per second) is reasonably precise and repeatable. There is no meaningful random error: the reported error bars on all but one of the measurements are below 0.5 FPS. If anything, there might be a systematic error, but you are testing the whole system. Of course the kernel might affect the system performance, but THAT'S EXACTLY WHAT YOU ARE MEASURING.
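
        To make the random-vs-systematic distinction concrete, here is a quick sketch with made-up FPS samples (not figures from the article):

            import statistics

            # Five hypothetical repeat runs of one test on one kernel.
            runs = [87.2, 87.6, 86.9, 87.4, 87.1]

            mean = statistics.mean(runs)
            # Standard error of the mean estimates the random (run-to-run) error.
            sem = statistics.stdev(runs) / len(runs) ** 0.5
            print(f"{mean:.1f} +/- {sem:.2f} FPS")

            # A SEM well below 0.5 FPS means run-to-run noise cannot explain a
            # multi-FPS gap between kernels; only a systematic effect can, and
            # here the systematic effect under test IS the kernel change.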

        Now shut up.
        Last edited by PeterKraus; 04 February 2013, 10:45 AM.

        • #24
          Originally posted by PeterKraus View Post
          Don't you go all bloody science-y on me.

          It's not a fault in "a measurement method", as the measurement of time (or frames per second) is reasonably precise and repeatable. There is no meaningful random error: the reported error bars on all but one of the measurements are below 0.5 FPS. If anything, there might be a systematic error, but you are testing the whole system. Of course the kernel might affect the system performance, but THAT'S EXACTLY WHAT YOU ARE MEASURING.

          Now shut up.
          Marry me

          • #25
            Originally posted by PeterKraus View Post
            Don't you go all bloody science-y on me.

            It's not a fault in "a measurement method", as the measurement of time (or frames per second) is reasonably precise and repeatable. There is no meaningful random error: the reported error bars on all but one of the measurements are below 0.5 FPS. If anything, there might be a systematic error, but you are testing the whole system. Of course the kernel might affect the system performance, but THAT'S EXACTLY WHAT YOU ARE MEASURING.

            Now shut up.
            You didn't get the point at all, which makes me believe that you are incapable of even understanding what was pointed out. So do yourself a favor: don't make yourself look like a fool by answering posts you don't understand.

            • #26
              Originally posted by Vim_User View Post
              You didn't get the point at all, which makes me believe that you are incapable of even understanding what was pointed out. So do yourself a favor: don't make yourself look like a fool by answering posts you don't understand.
              You do not seem to comprehend that newer kernels expose different (versioned) interfaces to the radeon driver. The radeon driver then exposes a certain OpenGL version (via Mesa) to its user apps. This makes applications use newer OpenGL techniques that are either not optimized and/or demand more from the card. Hence the FPS decreases.

              I understand your point. But strictly speaking: newer features are also part of the performance of kernel versions 3.4 through 3.8.

              Furthermore, I would like to give you an example of where your reasoning goes haywire: Zcomp. It's a new feature that brings a performance increase if exposed. Now, which would it be:

              - disable the feature: someone could say you are deliberately inhibiting the driver's performance
              - enable the feature: someone else could say you are enabling new features, which makes the test invalid since you changed more than the kernel version

              Either way, both arguments bear on the validity of the test. HOWEVER, strictly speaking: changing only the kernel means testing only the change of kernels. So this *includes* features being enabled/disabled and software behaving differently, since those exact changes are part of these kernel versions. The fact that userspace acts differently does not matter, because userspace is the same piece of software for each test run.

              This is where you and I disagree: newer kernels contain newer features that should be included in the test, yes or no? I say yes, because newer kernels imply newer features.
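
              A sketch of what I mean; the DRM minor-version thresholds and feature names are invented for illustration, not taken from the real radeon/Mesa code:

                  # Userspace gates features on the kernel's DRM interface version,
                  # so booting a newer kernel changes driver behaviour even though
                  # the userspace binaries are byte-for-byte identical.
                  def enabled_features(drm_minor: int) -> set:
                      features = {"baseline_accel"}
                      if drm_minor >= 6:            # hypothetical threshold
                          features.add("zcomp")
                      if drm_minor >= 8:            # hypothetical threshold
                          features.add("2d_tiling")
                      return features

                  for kernel, drm_minor in [("3.4", 5), ("3.6", 7), ("3.8", 9)]:
                      print(kernel, sorted(enabled_features(drm_minor)))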

              I quit now...

              • #27
                I do not say that newer features should not be enabled if available, but since this is a test of changes in graphics driver performance, only changes in the graphics driver should be considered.
                What I am saying is that the testbed should be chosen so that the only thing that changes is the part being benchmarked. Everything else should be kept as equal as possible. These benchmarks are specifically aimed at video driver performance, not overall system performance.

                But we do not know whether the video driver is all that changed: newly available functions, or changes in the implementation of existing ones, may have a positive or negative impact on Unity's performance regression, and that is not detectable with this specific test setup. Therefore a testbed should be chosen that does not contain such sources of ambiguity.
                That is what I am saying, nothing more, nothing less. It just comes down to choosing a WM without those performance problems, and then this benchmark is as good as it can be.
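
                Roughly, all I am asking for is a harness along these lines: hold everything constant except the kernel, and record what was held constant. A sketch only; the phoronix-test-suite subcommand, the test profile, and the WM choice are assumptions:

                    import platform
                    import subprocess

                    # Held constant for every kernel under test (example values).
                    FIXED = {"wm": "openbox", "resolution": "1920x1080"}

                    def run(test: str) -> None:
                        # platform.release() reports the running kernel -- the one variable.
                        print(f"kernel={platform.release()} wm={FIXED['wm']} "
                              f"res={FIXED['resolution']} test={test}")
                        # Assumed CLI; check your phoronix-test-suite version for the
                        # exact subcommand.
                        subprocess.run(["phoronix-test-suite", "batch-benchmark", test])

                    run("pts/nexuiz")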
