A Look At The Intel Cascade Lake Performance For Windows Server 2019 vs. Linux vs. FreeBSD Benchmarks


  • aht0
    replied
    Originally posted by Grinch View Post
OK, so judging by that, FreeBSD's default flags are '-O2 -fstack-protector -fno-strict-aliasing', and LAME's own default optimization is -O3. Thanks for the info!
Tests are now worthless anyway. I upgraded my 12-STABLE, and it seems FreeBSD's system compiler has been bumped from LLVM 6 all the way to LLVM 8 sometime over the past few weeks. The default flags staying the same isn't guaranteed either. I haven't had time to rebuild ports yet, so I can't add much atm.
    Last edited by aht0; 04-27-2019, 09:48 AM.


  • Grinch
    replied
    Originally posted by alcalde View Post
Since that was the question being asked, the accuracy of comparisons between the Linux and BSD versions wasn't controlled for, as this wasn't relevant.
Of course it's relevant: if the comparisons between the Linux distros and the BSDs aren't accurate, then the Linux vs. Windows and BSD vs. Windows comparisons aren't accurate either.


  • Grinch
    replied
    Originally posted by aht0 View Post
    Code:
    cc -DHAVE_CONFIG_H -I. -I.. -I../libmp3lame -I../include -I.. -DLIBICONV_PLUG -O3 -Wall -pipe -O2 -pipe -DLIBICONV_PLUG -fstack-protector -fno-strict-aliasing
OK, so judging by that, FreeBSD's default flags are '-O2 -fstack-protector -fno-strict-aliasing', and LAME's own default optimization is -O3. Thanks for the info!
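One detail behind that conclusion is worth spelling out: with both GCC and Clang, when several -O flags appear on one command line, the last one wins, so the '-O3 ... -O2' sequence in the pasted build line effectively compiles at -O2. A minimal sketch to check this, assuming a GCC- or Clang-compatible `cc` on the PATH (the `__OPTIMIZE__` macro is only predefined when optimization is enabled):

```shell
# -O3 alone: optimization is on, so __OPTIMIZE__ is predefined.
cc -O3 -E -dM -xc /dev/null | grep '__OPTIMIZE__'
# -O0 added after -O3: the last -O flag wins, optimization is off,
# and __OPTIMIZE__ vanishes from the predefined-macro dump.
cc -O3 -O0 -E -dM -xc /dev/null | grep '__OPTIMIZE__' || echo "-O0 won"
```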


  • alcalde
    replied
    Originally posted by Grinch View Post

As of now it's just a big mess of different OSes running the same benchmarks but with very different compiler optimization settings, making it pretty much impossible to draw any worthwhile conclusions. That's a shame, since these kinds of benchmarks are very interesting (imo).
You're still missing the point. The question being asked by the article's author was: which is faster, Windows Server or Linux? Since that was the question being asked, the accuracy of comparisons between the Linux and BSD versions wasn't controlled for, as this wasn't relevant. All that mattered was whether there was a distro that was faster than Windows in a particular benchmark, and if so, by how much. Wait for an article comparing Linux vs. BSD to get the answer you're looking for. In those comparisons you're going to find a lot more normalization going on between distros.


  • aht0
    replied
    Originally posted by Grinch View Post
I don't get the point of these tests when so many variables differ. If you are not even using the same compiler options for these benchmarks, they say pretty much nothing about the underlying OS performance and are essentially just a comparison between different compiler optimization levels.

For example, in the LAME encoding test, FreeBSD 12 GCC lists '-lncurses -liconv' and no optimization level, meaning the default will be used, which is -O0, as in practically zero optimization. Then the other FreeBSD LAME encoding benchmark uses '-O3 -pipe -lncurses', as in the highest optimization level. It makes no sense. This is such poor methodology that I find the results, apart from the Go benchmarks (which don't have any optimization options), pretty much worthless.

Again, what is the point of these tests?
Sorry for the delay, I had already forgotten about it. Here's one row copied from compiling audio/lame:
    Code:
    cc -DHAVE_CONFIG_H  -I. -I.. -I../libmp3lame -I../include -I..  -DLIBICONV_PLUG  -O3 -Wall -pipe -O2 -pipe  -DLIBICONV_PLUG -fstack-protector -fno-strict-aliasing   -I/usr/local/include   -MT rtp.o -MD -MP -MF .deps/rtp.Tpo -c -o rtp.o rtp.c
    Last edited by aht0; 04-26-2019, 01:19 AM.


  • Grinch
    replied
    Originally posted by alcalde View Post
    Linux vs. FreeBSD Benchmarks".
    This is what made the article interesting for me.

And as for 'not a focus on compiler settings': the way this test was set up, compiler settings were most likely the dominant factor in the results we got, which in turn was the basis of my criticism. If the same compiler settings had been used for all the tested OSes, we would have seen the actual impact of the underlying OS performance.

As of now it's just a big mess of different OSes running the same benchmarks but with very different compiler optimization settings, making it pretty much impossible to draw any worthwhile conclusions. That's a shame, since these kinds of benchmarks are very interesting (imo).

The exception was the Go benchmarks, which don't have optimization levels that can be botched in the benchmarks, so those results are actually meaningful in an OS performance comparison (which this was). I just wish the other tests could have been meaningful as well.


  • alcalde
    replied
    Originally posted by Grinch View Post

Well, as someone who has been using open source OSes 24/7 for the past 15+ years, I'm interested in what performance benefits (if any) they have against each other, as it could inform a potential switch. Seeing Windows being beaten or not is of no concern to me, as I'm not interested in using it either way.
    If you're not concerned about Windows, you might not want to read an article titled "A Look At The Intel Cascade Lake Performance For Windows Server 2019 vs. Linux vs. FreeBSD Benchmarks". The point of the article was to compare Windows vs. Linux, not Linuxes against each other, hence there wasn't a focus on compiler settings.
    Last edited by alcalde; 04-25-2019, 06:18 PM.


  • Grinch
    replied
    Originally posted by alcalde View Post
    You're not supposed to be comparing the open source OSes against each other. You're only supposed to be concerned with whether any of them are beating Windows or not.
Well, as someone who has been using open source OSes 24/7 for the past 15+ years, I'm interested in what performance benefits (if any) they have against each other, as it could inform a potential switch. Seeing Windows being beaten or not is of no concern to me, as I'm not interested in using it either way.


  • alcalde
    replied
    Originally posted by Grinch View Post
I don't get the point of these tests when so many variables differ. If you are not even using the same compiler options for these benchmarks, they say pretty much nothing about the underlying OS performance ...

Again, what is the point of these tests?
You're not supposed to be comparing the open source OSes against each other. You're only supposed to be concerned with whether any of them are beating Windows or not. And since they are, handily, and since my calculation suggests that the price to run Windows Server 2019 Standard on a 2-CPU, 56-core system as tested here is about $3,400 USD, you're supposed to print this article out and wave it in the face of all your Windows-using friends. Seriously.


  • Grinch
    replied
    Originally posted by Vistaus View Post
    What's stopping you from re-running these tests the way you like them and sharing the results with us?
The time required to set all these systems up, and also to dissect this benchmarking suite enough to improve on it. I am very interested in seeing meaningful results for this kind of wide-range benchmark suite, but not so interested that I would spend a large chunk of my very limited spare time doing so.

Michael, on the other hand, does this for a living (and I'm not belittling his burden), and he has done great tests before which I have praised, like the one where he compared a lot of packages compiled with -O2 versus -O3; that was good methodology and therefore VERY informative.

This, however, is all over the place. The excuse I've seen is that he is using distro flags (which I think should then be listed at the beginning in order to give proper context to these tests), but that doesn't hold water: just looking at the FreeBSD benchmarks, they are sometimes compiled with -O2 and other times with -O3, and don't even use the same options for the same FreeBSD version in the same test (!), as in the LAME benchmark.

    It's a shame, because the benchmarks themselves are very interesting, if only they could be done in a way that gave the results actual meaning.
