A Look At The Intel Cascade Lake Performance For Windows Server 2019 vs. Linux vs. FreeBSD Benchmarks
Originally posted by Grinch View Post
Well, as someone who has been using open-source OSes 24/7 for the past 15+ years, I'm interested in what performance benefits (if any) they have against each other, as that could inform a potential switch. Seeing Windows beaten or not is of no concern to me; I'm not interested in using it either way.
Last edited by alcalde; 25 April 2019, 06:18 PM.
Originally posted by alcalde View Post
Linux vs. FreeBSD Benchmarks".
And as for 'not a focus on compiler settings': the way this test was set up, compiler settings were most likely the dominant factor in the results we got, which in turn was the basis of my criticism. If the same compiler settings had been used for all tested OSes, we would see the actual impact of the underlying OS on performance.
As it stands, it's just a big mess of different OSes running the same benchmarks but with very different compiler optimization settings, making it pretty much impossible to draw any worthwhile conclusions. That's a shame, since these kinds of benchmarks are very interesting (imo).
The exceptions were the Go benchmarks, which don't have optimization levels that can be botched, so those results are actually meaningful in an OS performance comparison (which this was). I just wish the other tests could have been meaningful as well.
Originally posted by Grinch View Post
I don't get the point of these tests when so many variables differ. If you are not even using the same compiler options for these benchmarks, they can say pretty much nothing about the underlying OS performance; it's essentially just a comparison between different compiler optimization levels.
For example, for LAME encoding, FreeBSD 12 GCC lists '-lncurses -liconv' and no optimization level, meaning the default will be used, which is -O0, i.e. practically zero optimization. Then the other FreeBSD LAME encoding benchmark uses '-O3 -pipe -lncurses', i.e. the highest optimization level. It makes no sense. This is such poor methodology that I find the results, apart from the Go benchmarks (which don't have any optimization options), pretty much worthless.
Again, what is the point of these tests?
Code:
cc -DHAVE_CONFIG_H -I. -I.. -I../libmp3lame -I../include -I.. -DLIBICONV_PLUG -O3 -Wall -pipe -O2 -pipe -DLIBICONV_PLUG -fstack-protector -fno-strict-aliasing -I/usr/local/include -MT rtp.o -MD -MP -MF .deps/rtp.Tpo -c -o rtp.o rtp.c
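[Editor's note on that command line: with both GCC and Clang, when several -O flags appear, the last one on the command line takes effect, so the '-O3 ... -O2' above actually compiles at -O2. A quick probe of this behavior (my own sketch, not from the thread), using the compilers' predefined __OPTIMIZE__ macro, which is set whenever optimization is enabled:]

```shell
# Dump predefined macros to see which -O flag "won".
# Here the later -O2 wins over -O3, so optimization stays on:
cc -O3 -O2 -E -dM -xc /dev/null | grep __OPTIMIZE__
# -> #define __OPTIMIZE__ 1

# Here the later -O0 wins over -O2, so __OPTIMIZE__ is not defined:
cc -O2 -O0 -E -dM -xc /dev/null | grep __OPTIMIZE__ || echo "last flag was -O0: no optimization"
# -> last flag was -O0: no optimization
```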
Last edited by aht0; 26 April 2019, 01:19 AM.
Originally posted by alcalde View Post
Since that was the question being asked, the accuracy of comparisons between Linux and BSD versions wasn't controlled for, as this wasn't relevant.
Originally posted by Grinch View Post
Ok, so judging by that, FreeBSD's default flags are '-O2 -fstack-protector -fno-strict-aliasing', with LAME's default optimization being -O3. Thanks for the info!
Last edited by aht0; 27 April 2019, 09:48 AM.