The Performance Impact To AMD Zen 2 Compiler Tuning On GCC 9 + Znver2


    Phoronix: The Performance Impact To AMD Zen 2 Compiler Tuning On GCC 9 + Znver2

    One of the areas that I always have "fun" benchmarking for new CPU launches is looking at the compiler performance. Following the recent Ryzen 3000 series launch I carried out some initial benchmarks looking at the current Zen 2 performance using the newest GCC 9 stable series with its "znver2" optimizations. Here is a look at how the Znver2 optimizations work out when running some benchmarks on the optimized binaries with a Ryzen 9 3900X running Ubuntu 18.04 LTS.

    http://www.phoronix.com/vr.php?view=28054

  • #2
    I asked this before and got no answer.

    So, once again - what is the reason to use the geometric mean instead of the arithmetic one?

    Hint: If you're not sure why, it's certainly wrong.



    • #3
      Originally posted by entropy View Post
      I asked this before and got no answer.

      So, once again - what is the reason to use the geometric mean instead of the
      Instead of the what? The geometric mean is best used when the tests have different units, as is the case for these many different compiler benchmarks.
      Michael Larabel
      http://www.michaellarabel.com/



      • #4
        Originally posted by Michael View Post

        Instead of the what? The geometric mean is best used when the tests have different units, as is the case for these many different compiler benchmarks.
        See above. I hit enter too early.



        • #5
          Originally posted by entropy View Post

          See above. I hit enter too early.
          The arithmetic mean would be inaccurate when different scales / units of measurement are involved, such as some benchmarks reporting their results in the thousands (MB/s) and other tests in single digits (seconds); doing a straight average of them leads to a wildly inaccurate overview.
          Michael Larabel
          http://www.michaellarabel.com/
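
To illustrate the point with made-up numbers (the systems, scores, and tests here are hypothetical, not from the article): a straight average across different units is dominated by the largest-scale test, while a geometric mean of per-test ratios weighs every test equally.

```python
import math

# Hypothetical scores for two systems: one bandwidth test in MB/s
# (higher is better) and two timed tests in seconds (lower is better).
a = {"bandwidth MB/s": 4200.0, "compile s": 95.0, "encode s": 6.1}
b = {"bandwidth MB/s": 4100.0, "compile s": 80.0, "encode s": 5.2}

# A straight arithmetic average of the raw numbers is dominated by the
# bandwidth test, whose values are ~1000x larger than the timed results.
print(sum(a.values()) / 3)  # ~1433.7 -- says nothing about compile/encode
print(sum(b.values()) / 3)  # ~1395.1

# A geometric mean of per-test ratios (b relative to a, inverting the
# lower-is-better metrics) is unitless and weighs every test equally.
ratios = [b["bandwidth MB/s"] / a["bandwidth MB/s"],  # higher is better
          a["compile s"] / b["compile s"],            # lower is better
          a["encode s"] / b["encode s"]]              # lower is better
geo = math.prod(ratios) ** (1 / len(ratios))
print(round(geo, 3))  # ~1.108: b comes out about 11% faster overall here
```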



          • #6
            Originally posted by Michael View Post

            The arithmetic mean would be inaccurate when different scales / units of measurement are involved, such as some benchmarks reporting their results in the thousands (MB/s) and other tests in single digits (seconds); doing a straight average of them leads to a wildly inaccurate overview.
            And as explained more elegantly via Wikipedia:

            A geometric mean is often used when comparing different items—finding a single "figure of merit" for these items—when each item has multiple properties that have different numeric ranges.[3] For example, the geometric mean can give a meaningful value to compare two companies which are each rated at 0 to 5 for their environmental sustainability, and are rated at 0 to 100 for their financial viability. If an arithmetic mean were used instead of a geometric mean, the financial viability would have greater weight because its numeric range is larger. That is, a small percentage change in the financial rating (e.g. going from 80 to 90) makes a much larger difference in the arithmetic mean than a large percentage change in environmental sustainability (e.g. going from 2 to 5). The use of a geometric mean normalizes the differently-ranged values, meaning a given percentage change in any of the properties has the same effect on the geometric mean. So, a 20% change in environmental sustainability from 4 to 4.8 has the same effect on the geometric mean as a 20% change in financial viability from 60 to 72.
            Michael Larabel
            http://www.michaellarabel.com/
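
The quoted Wikipedia example is easy to verify numerically; this small sketch just re-checks its arithmetic:

```python
import math

def geomean(xs):
    return math.prod(xs) ** (1 / len(xs))

# The two ratings from the quoted example: sustainability (0-5), finance (0-100).
base = geomean([4, 60])

# A 20% improvement in either property scales the geometric mean by the
# same factor, sqrt(1.2), regardless of the property's numeric range.
env_up = geomean([4.8, 60])  # sustainability 4 -> 4.8
fin_up = geomean([4, 72])    # finance 60 -> 72
print(env_up / base, fin_up / base)  # both ~1.0954

# The arithmetic mean, by contrast, barely registers the small-range change:
print((4.8 + 60) / 2 - (4 + 60) / 2)  # ~ +0.4
print((4 + 72) / 2 - (4 + 60) / 2)    # +6.0
```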



            • #7
              The arithmetic mean gives more weight to larger values. So if one benchmark emits seriously large numbers, it is going to dominate and turn everything else into noise. It is surely the worst to use.
              The harmonic mean is great for a single velocity metric, e.g. km/h. It gives the small values more weight, so it is great for determining a total-time equivalent. Its use when everything is fps is only correct when every frame needs to be done, e.g. average media encoding speed. It is not as valid for game fps, as slower rendering means more skipped frames.
              The geometric mean is a pretty reasonable alternative for different scales. And, honestly, I think it is the only viable option for most benchmarks.
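
A small sketch of the velocity case described above (the classic equal-distance trip, with numbers invented for illustration):

```python
from statistics import geometric_mean, harmonic_mean, mean

# Two equal-distance legs: 60 km at 30 km/h, then 60 km at 90 km/h.
# Total: 120 km in 2h 40min, so the true average speed is 45 km/h.
speeds = [30.0, 90.0]
print(mean(speeds))           # 60.0 -- arithmetic mean overstates it
print(harmonic_mean(speeds))  # ~45.0 -- matches the true average speed

# The geometric mean sits between the two and, unlike either, gives the
# same answer whether you measure speed or its reciprocal (time per km).
print(geometric_mean(speeds))                       # ~51.96
print(1 / geometric_mean([1 / s for s in speeds]))  # ~51.96 as well
```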



              • #8
                Originally posted by grigi View Post
                The arithmetic mean gives more weight to larger values. So if one benchmark emits seriously large numbers, it is going to dominate and turn everything else into noise. It is surely the worst to use.
                The harmonic mean is great for a single velocity metric, e.g. km/h. It gives the small values more weight, so it is great for determining a total-time equivalent. Its use when everything is fps is only correct when every frame needs to be done, e.g. average media encoding speed. It is not as valid for game fps, as slower rendering means more skipped frames.
                The geometric mean is a pretty reasonable alternative for different scales. And, honestly, I think it is the only viable option for most benchmarks.
                Sure, the arithmetic mean is not a robust estimator.
                But the geometric mean isn't one either. Take a benchmark that emits a seriously small number: it is going to dominate.
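
To illustrate the objection above with invented numbers: a single near-zero result drags the geometric mean down far more than it shifts the arithmetic mean.

```python
from statistics import geometric_mean, mean

# Ten tests at parity (ratio 1.0) plus one outlier where, say, a broken
# run produced a near-zero ratio -- all values invented for illustration.
ratios = [1.0] * 10 + [0.001]
print(mean(ratios))            # ~0.909 -- the outlier shifts it modestly
print(geometric_mean(ratios))  # ~0.534 -- one tiny value dominates the summary
```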



                • #9
                  Originally posted by entropy View Post

                  Sure, the arithmetic mean is not a robust estimator.
                  But the geometric mean isn't one either. Take a benchmark that emits a seriously small number: it is going to dominate.
                  And as always, it depends on the tests. Is there one with small numbers? No, there isn't...



                  • #10
                    Well, skipping the math lessons, very interesting results. However, should we not be comparing against generic x86-64 rather than Znver1? I ask because I imagine most distros will not do much more than request generic x86-64 optimizations.

                    It will be interesting to see how these numbers compare with more mature compiler tech in a year, maybe even comparing performance against LLVM/Clang. I'm actually surprised the optimization did as well as it did this early in the game.

