A Look At The Intel Cascade Lake Performance For Windows Server 2019 vs. Linux vs. FreeBSD Benchmarks


  • #11
    Originally posted by kobblestown View Post
    The point is to generate clicks. No offense, that's how Michael makes his living.
    I really hope that's not the case, because these kinds of benchmarks are genuinely interesting, if only they were conducted in a way that gave the results more meaning.



    • #12
      Originally posted by aht0 View Post
      Then, you can reasonably assume that most people would just go with default values, with only a small subset going for custom settings
      I've heard this argument before, but looking at the results of these tests I find it hard to believe that they reflect the distros' C/CXXFLAGS. I mentioned the FreeBSD LAME tests: one test uses -O3 and is ~70% (!) faster than the other FreeBSD test using GCC. Why is that test not using any optimization level at all? It makes no sense: it's EVEN THE SAME FreeBSD version (12), with -O3 (is that the default for FreeBSD?) in one test and presumably -O0 in the other. It is totally misleading.

      And I have strong scepticism about the x264 tests as well. x264 is heavily assembly-optimized and really shouldn't be very dependent on compiler optimizations, yet the difference between the slowest Linux (Ubuntu LTS) and the fastest (Clear) is something like 30%. As it stands I have a hard time taking any of these results at face value, apart from the Go results, since there are no optimization settings to botch there. That's a shame, because these tests could be so informative if done in a way that made sense.

      Also, as far as I know these runs are all automated, so how hard could it be to pass/export the same optimization options to each OS being tested?
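
      Something along these lines on each OS would already be an improvement. This is just a rough sketch: I'm assuming the test profile's install script actually picks up CFLAGS/CXXFLAGS from the environment (not every profile does), and the profile name is from memory, so adjust it to whatever the article actually ran.

```python
# Rough sketch: pin the compiler flags in the environment before letting the
# Phoronix Test Suite build and run a test, so every OS compiles with the
# same options. Assumes the profile's install script honors CFLAGS/CXXFLAGS
# from the environment, which is not guaranteed for every profile.
import os
import subprocess

TEST = "pts/encode-mp3"  # the LAME test profile; name from memory

env = os.environ.copy()
env["CFLAGS"] = "-O3 -march=native"   # identical flags on every OS under test
env["CXXFLAGS"] = env["CFLAGS"]

# Force a clean rebuild so the pinned flags are actually used, then run it.
subprocess.run(["phoronix-test-suite", "force-install", TEST], env=env, check=True)
subprocess.run(["phoronix-test-suite", "benchmark", TEST], env=env, check=True)
```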



      • #13
        Originally posted by Grinch View Post

        Why is that test not using any optimization level at all? It makes no sense: it's EVEN THE SAME FreeBSD version (12), with -O3 (is that the default for FreeBSD?) in one test and presumably -O0 in the other. It is totally misleading.
        I'll take a look after I get home; I've got 12-STABLE on my laptop.



        • #14
          Originally posted by aht0 View Post
          For one, this is Michael's living. So re-compiling everything with custom settings would slow down his publishing of new articles tremendously (not to mention the extra electricity costs), which would mean a deep dive in his income.

          Then, you can reasonably assume that most people would just go with default values, with only a small subset going for custom settings, and he would be unlikely to guess those custom settings precisely for more than an even smaller subset of that group. So it makes every sense to go at it as he has been. If somebody is really keen to test their own settings on a bunch of OSes or distros, they are free to do so; PTS is free.
          Well said. The thing about running benchmarks on open-source software is that the nitpicks and complaints will never end. No matter what you do, someone is going to ask you to try something else. Hell, even for Windows benchmarks, people get all huffy about stuff like resolution and which programs/games are used during the tests.
          I think the way Michael has done testing is a happy medium: a modest variety of default configurations.

          But, perhaps Michael should allow people to remotely run PTS benchmarks on some of his systems that aren't queued up in his schedule for testing, so long as they provide an article that can be posted. That way it's a win-win: he gets more site content for free and people get to see test results for obscure configurations.



          • #15
            Originally posted by Grinch View Post

            I've heard this argument before, but looking at the results of these tests I find it hard to believe that they reflect the distros' C/CXXFLAGS. I mentioned the FreeBSD LAME tests: one test uses -O3 and is ~70% (!) faster than the other FreeBSD test using GCC. Why is that test not using any optimization level at all? It makes no sense: it's EVEN THE SAME FreeBSD version (12), with -O3 (is that the default for FreeBSD?) in one test and presumably -O0 in the other. It is totally misleading.

            And I have strong scepticism about the x264 tests as well. x264 is heavily assembly-optimized and really shouldn't be very dependent on compiler optimizations, yet the difference between the slowest Linux (Ubuntu LTS) and the fastest (Clear) is something like 30%. As it stands I have a hard time taking any of these results at face value, apart from the Go results, since there are no optimization settings to botch there. That's a shame, because these tests could be so informative if done in a way that made sense.

            Also, as far as I know these runs are all automated, so how hard could it be to pass/export the same optimization options to each OS being tested?
            What's stopping you from re-running these tests the way you like them and sharing the results with us?



            • #16
              Originally posted by jacob View Post

              It's praiseworthy indeed. But to be fair, although FreeBSD can't directly import GPL code from Linux, they no doubt watch and analyse carefully how some of the performance optimisations are done in Linux, so they indirectly benefit from the billions of dollars invested into Linux too, except of course for some of the patent-encumbered algorithms like RCU.
              By all accounts FreeBSD *should* be slower.
              It uses Linux's version of Java in its emulation layer?
              ZFS does a lot more work than standard filesystems.
              And as mentioned it has less development.

              And before you say "it benefited from the billions spent on Linux" (and there isn't anything wrong with that; a proper design is a proper design...), why is it faster than Windows, which can also do the same thing?

              I actually think FreeBSD development is just better planned out and more focused. It's driven by the core team with a clear direction, not randos throwing stuff at the wall. There is also less of a focus on the desktop there. "Boot times? We don't need no stinking fast boot times!" lol



              • #17
                Looks like this is the arena where openSUSE shows a little value, with 11 benchmarks finishing in the top two. It's probably never going to be, by default, a top-flight video game rig or a serious multimedia desktop competitor, but I don't think that's their goal anyway.



                • #18
                  Originally posted by schmidtbag View Post
                  But, perhaps Michael should allow people to remotely run PTS benchmarks on some of his systems that aren't queued up in his schedule for testing, so long as they provide an article that can be posted. That way it's a win-win: he gets more site content for free and people get to see test results for obscure configurations.
                  schmidtbag, that would be great. I wanted to see postfix, postgres, and Manjaro in this test.



                  • #19
                    Originally posted by Vistaus View Post
                    What's stopping you from re-running these tests the way you like them and sharing the results with us?
                    The time required to set all these systems up, and also to dissect this benchmarking suite enough to improve on it. I am very interested in seeing meaningful results from this kind of wide-ranging benchmark suite, but not so interested that I would spend a large chunk of my very limited spare time doing it.

                    Michael, on the other hand, does this for a living (and I'm not belittling his burden), and he has done great tests before which I have praised, like the one where he compared a lot of packages compiled with -O2 versus -O3; that was good methodology and therefore VERY informative.

                    This, however, is all over the place. The excuse I've seen is that he is using the distro flags (which I think should then be listed at the beginning of the article to give proper context to these tests), but that doesn't hold water: just looking at the FreeBSD benchmarks, they are sometimes compiled with -O2 and other times with -O3, and not even with the same options for the same FreeBSD version in the same test (!), as in the LAME benchmark.

                    It's a shame, because the benchmarks themselves are very interesting, if only they could be done in a way that gave the results actual meaning.
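
                    If distro defaults really are the methodology, they're not hard to capture up front. Here's a rough sketch of what I'd want to see at the top of the article; the exact commands are my assumptions about the stock packaging tools on each platform, and note these are packaging defaults, which don't necessarily match what the PTS-built binaries actually got.

```python
# Rough sketch: print each platform's usual source of default C flags, so an
# article could list them up front. Assumes the stock tools are installed;
# these are packaging defaults, not necessarily what PTS actually used.
import platform
import subprocess

def run(cmd):
    try:
        out = subprocess.run(cmd, capture_output=True, text=True)
        return out.stdout.strip() or "(empty)"
    except FileNotFoundError:
        return "(tool not available)"

if platform.system() == "FreeBSD":
    # Ports CFLAGS (normally "-O2 -pipe" unless /etc/make.conf overrides it);
    # assumes a ports tree is checked out.
    print("ports CFLAGS:", run(["make", "-C", "/usr/ports/audio/lame", "-V", "CFLAGS"]))
else:
    # Debian/Ubuntu packaging defaults
    print("dpkg-buildflags CFLAGS:", run(["dpkg-buildflags", "--get", "CFLAGS"]))
    # RPM-based distros (openSUSE etc.)
    print("rpm %optflags:        ", run(["rpm", "--eval", "%{optflags}"]))
    # And the compiler the tests would have used
    print("compiler:             ", run(["cc", "--version"]).splitlines()[0])
```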



                    • #20
                      Originally posted by Grinch View Post
                      I don't get the point of these tests when so many variables differ. If you are not even using the same compiler options for these benchmarks, it can say pretty much nothing about the underlying OS performance ...

                      Again, what is the point of these tests?
                      You're not supposed to be comparing the open-source OSes against each other. You're only supposed to be concerned with whether any of them are beating Windows or not. And since they are (handily), and my calculation puts the price of running Windows Server 2019 Standard on a 2-CPU, 56-core system as tested here at about $3,400 USD, you're supposed to print this article out and wave it in the face of all your Windows-using friends. Seriously.
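
                      For anyone who wants to check that number, here's the back-of-the-envelope version; the ~$972 USD list price for a 16-core Standard licence is my assumption (figure from memory), and client access licences aren't included.

```python
# Back-of-the-envelope licence math: Windows Server 2019 Standard is licensed
# per core, with the base list price covering 16 cores. The ~$972 USD figure
# is an assumed list price; CALs are not included.
price_per_16_cores = 972.0
cores = 2 * 28                          # 2 CPUs x 28 cores, as tested here
cost = price_per_16_cores * cores / 16
print(f"{cores} cores -> ${cost:,.0f}")  # 56 cores -> $3,402, i.e. about $3,400
```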

