A Look At The Intel Cascade Lake Performance For Windows Server 2019 vs. Linux vs. FreeBSD Benchmarks
-
Originally posted by Grinch View Post
Why is this particular test not using any optimization level at all? It makes no sense: it's EVEN THE SAME FreeBSD version (12), with -O3 (is that the default for FreeBSD?) in one test and (presumably) -O0 in the other. It is totally misleading.
-
Originally posted by aht0 View Post
For one, this is Michael's living. Re-compiling everything with custom settings would slow down his publishing of new articles tremendously (not to mention the extra electricity costs), which would mean a deep cut to his income.
Then, you can reasonably assume that most people would just go with the default values, with only a small subset going for custom settings, and he would be unlikely to guess those custom settings correctly for more than an even smaller subset of the latter folks. So it makes every sense to go at it as he has been. If somebody is really keen to test their settings on a bunch of OSes or distros, they are free to do so; PTS is free.
I think the way Michael has done testing is a happy medium: a modest variety of default configurations.
But, perhaps Michael should allow people to remotely run PTS benchmarks on some of his systems that aren't queued up in his schedule for testing, so long as they provide an article that can be posted. That way it's a win-win: he gets more site content for free and people get to see test results for obscure configurations.
-
Originally posted by Grinch View Post
I've heard this argument before, but looking at the results of these tests I find it hard to believe that they reflect the distros' C/CXXFLAGS. I mentioned the FreeBSD LAME tests: one test uses -O3 and is ~70% (!) faster than the other FreeBSD test using GCC. Why is this particular test not using any optimization level at all? It makes no sense: it's EVEN THE SAME FreeBSD version (12), with -O3 (is that the default for FreeBSD?) in one test and (presumably) -O0 in the other. It is totally misleading.
And I have strong scepticism regarding the x264 tests as well. x264 is heavily assembly-optimized and really shouldn't be very dependent on compiler optimizations, yet the difference between the slowest Linux result (Ubuntu LTS) and the fastest (Clear) is something like 30%. As it stands, I have a hard time taking any of these results at face value, again apart from the Go results, since there are no optimization settings to botch there. It's a shame, since these tests could be so informative if done in a way that made sense.
Also, as far as I know these runs are all automated, so how hard could it be to pass/export the same optimization options to each OS being tested?
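For what it's worth, the Phoronix Test Suite does let you pin the flags yourself: tests it builds from source generally respect exported `CC`/`CXX`/`CFLAGS`/`CXXFLAGS` environment variables. A hypothetical run (the exact test-profile names here are illustrative, not taken from the article) could look like:

```shell
# Force the same toolchain and optimization flags on every OS under test,
# instead of inheriting whatever each distro's build defaults happen to be.
export CC=gcc CXX=g++
export CFLAGS="-O2 -pipe" CXXFLAGS="-O2 -pipe"
phoronix-test-suite benchmark pts/encode-mp3 pts/x264
```

Anyone sceptical of the published numbers can re-run the suite this way on their own hardware, which is exactly the point aht0 makes about PTS being free.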
-
Originally posted by jacob View Post
It's praiseworthy indeed. But to be fair, although FreeBSD can't directly import GPL code from Linux, they no doubt watch and analyse carefully how some of the performance optimisations are done in Linux, so they indirectly benefit from the billions of dollars invested into Linux too, except of course for some patent-encumbered algorithms like RCU.
It uses Linux's version of Java in its emulation layer?
ZFS does a lot more work than standard filesystems.
And as mentioned it has less development.
And before you say "it benefited from the billions spent on Linux" (and there isn't anything wrong with that; a proper design is a proper design), why is it faster than Windows, which can also do the same thing?
I actually think FreeBSD development is just better planned out and more focused. It's driven by the core team with a clear direction, not randos throwing stuff at the wall. There is also less of a focus on the desktop there. "Boot times? We don't need no stinking fast boot times!" lol
-
Originally posted by Vistaus View Post
What's stopping you from re-running these tests the way you like them and sharing the results with us?
Michael, on the other hand, does this for a living (and I'm not belittling his burden), and he has done great tests before which I have praised, like the one where he compared a lot of packages compiled with -O2 versus -O3; that was good methodology and therefore VERY informative.
This, however, is all over the place. The excuse I've seen is that he is using distro flags (which I think should then be listed at the beginning to give proper context to these tests), but that doesn't hold water: just looking at the FreeBSD benchmarks, they are sometimes compiled with -O2 and other times with -O3, and don't even use the same options for the same FreeBSD version within the same test (!), as in the LAME benchmark.
It's a shame, because the benchmarks themselves are very interesting, if only they could be done in a way that gave the results actual meaning.
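If the distro flags were listed up front, readers could also verify them, since the major systems expose their default package-build flags through their own tooling. A sketch of how to query them (each command exists only on its respective system, and the flag sets shown in the comments are typical, not guaranteed):

```shell
# Print the C flags a distro builds its own packages with.
dpkg-buildflags --get CFLAGS   # Debian/Ubuntu: typically includes -O2
rpm -E '%{optflags}'           # Fedora/RHEL: typically -O2 plus hardening flags
# On FreeBSD, port/package defaults come from /etc/make.conf and the
# system bsd.*.mk files rather than a single query command.
```

Publishing this one-liner's output per OS alongside the benchmark tables would remove most of the ambiguity being argued about here.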
-
Originally posted by Grinch View Post
I don't get the point of these tests when so many variables differ. If you are not even using the same compiler options for these benchmarks, they say pretty much nothing about the underlying OS performance ...
Again, what is the point of these tests?