FreeBSD: A Faster Platform For Linux Gaming Than Linux?
-
Originally posted by deanjo: You know that is complete BS. In most cases, if there is a difference at all, it is marginal at best, and that is on a "properly configured" Gentoo. It's more like a case of Gentoo winning 5% of tests by a margin of less than 3%. In the case of PTS, which downloads and compiles the majority of its tests on every distro, that margin becomes even smaller. Gentoo may have had an edge a long time ago, when distros were dealing with multiple brands and generations of 32-bit processors, but those days are long gone.
I supposed that was easy to comprehend from my previous posts.
Anyway, I installed both Gentoo and Ubuntu on my laptop and my desktop. In the vast majority of tests Gentoo was ahead of Ubuntu by 3-10%. In games the difference was even bigger, especially on the desktop, which has an Nvidia card.
Finally, I kept Gentoo on my desktop for reasons of performance, control, customization and stability, while on my laptop I kept Ubuntu for ease of use.
-
Originally posted by yotambien: Anyway, if some random Gentoo user wants to keep singing the same old same old, at least be honest and word it something along the lines of:
Gentoo is the fastest distro in the solar system by a 1337%(*)
(*)Terms and Conditions apply. As compared to a standard broken Ubuntu installation on a limited number of tests using a custom version of GCC doped with 0.02% extract of lizard liver and 2 ppm virgin blood.
-
Originally posted by crazycheese: simply by the fact that the code was optimized for the host processor.
-
Originally posted by deanjo: And that is where your reasoning fails. The vast majority of applications and libraries out there do not contain any specific code for the extended instruction sets. Recompiling an app on a processor that has SSE4, for example, does not automagically make the application support SSE4.
Also, for code which actually uses the compiler to generate explicit SIMD code (using vector types, not assembly/intrinsics), making sure that you define the highest SSE instruction set your CPU supports enables the compiler to generate better code. The same goes for the countless other instruction performance differences that exist between different versions of x86/x64 CPUs, not to mention things like cache optimization, which certainly varies from CPU to CPU. So yes, there are a lot of possible gains to be had here. HOWEVER, I'd say it really only makes a 'real' (as in user-noticeable) difference on 'heavy' applications (assuming they don't already rely on hand-tuned assembly, which is sometimes the case with these types of apps). Also, the heuristics concerning these optimizations are VERY difficult for the compiler, so for the best results you generally need to compile using profile feedback.
As for the Linux kernel, I've compiled it myself with flags matching my hardware, but I can't say I've noticed or measured any change in performance. This, however, is not very surprising, given that A) a kernel is designed to be very low-latency, and B) the kernel heavily uses compiler extensions which control things like cache usage and branch prediction, overriding any compiler optimizations in those areas (which is a good thing, given that the devs have carefully tuned the code to perform as well as possible).
Bottom line: -march=native can make a worthwhile difference for heavy code which you use a lot. Recompiling every package in your distro, ehh... well, it's your spare time, I suppose.
-
Originally posted by deanjo: And that is where your reasoning fails. The vast majority of applications and libraries out there do not contain any specific code for the extended instruction sets. Recompiling an app on a processor that has SSE4, for example, does not automagically make the application support SSE4. Most applications out there may have common extended set support such as SSE2 (which is present in all x64 processors). There are a few apps out there that will take advantage (OpenSSL and GCM, for example), and those are easily enough recompiled on any distro to take advantage of the extra support, which happens to be what PTS does on the majority of its tests.
Of course, everything you wrote is correct.
There is a large amount of managed/interpreted code out there, for which only the interpreter or VM itself (if statically compiled) is a valid target for optimization.
It is also possible, and commonly done, for statically compiled code to detect the current CPU's features at runtime and activate assembly-accelerated code paths.
Modern chips are much cleverer than before: they can execute out of order, optimize on the fly, have huge caches, etc.
But my reasoning does not fail. Please re-examine your original claim about the ideal situation, with an ideal Gentoo and an ideal other distribution, both in ideal states. Neither is ideal.
Gentoo does not have the low barrier to entry that Ubuntu has, so its incoming flow of users is much smaller. But its community is more professional; then again, Canonical is a company and hires more or less professional people as well, which negates the difference.
Not everyone is ready to recompile everything, or understands the purpose of doing so. This is another blow to the Gentoo userbase.
So in the end there are more people maintaining, or simply looking over, Ubuntu/Debian packages, which contributes to package quality. This means Gentoo becomes less ideal, and its per-machine optimizations may even lose to the static optimizations of a more polished distro.
For dynamically executed code, Gentoo does not compile it, so its installation is as fast as on a binary distro.
As for static code that detects CPU instructions at runtime: such apps are not common (mplayer and other codec-heavy software, mostly), so elsewhere Gentoo has a theoretical advantage.
Self-optimizing CPUs can only optimize to a certain level, which means that if the compiler is well written for both generic and CPU-specific code, Gentoo will still have an advantage, albeit of a completely unpredictable size.
That's why, in the ideal case you described, Gentoo will always win.
But in reality, limited accessibility to the userbase and Gentoo's own political problems leave it with less quality control and fewer revisions, making it lose.
That is why, in reality, speed is not Gentoo's key factor, though if polished it could be. The key factor of Gentoo is Portage, and I have written about its possibilities; this is where even APT loses hands down. People are not using Gentoo for speed but for its customization abilities, one of which is recompilation, which may be used to get code optimized for the local CPU.
-
Originally posted by XorEaxEax: Bottom line: -march=native can make a worthwhile difference for heavy code which you use a lot. Recompiling every package in your distro, ehh... well, it's your spare time, I suppose.
-
Interesting benchmarks.
However, there is still the question of whether this is really a difference in kernel implementation and algorithms, or down to other factors.
First, one should use the same window manager (something as simple as Fluxbox, or the same version of Xfce without compositing).
Second, compiler versions and compiler options matter, for example the -mcpu and -mtune flags used (it looks like PC-BSD will work on a 486, so it probably uses -mcpu=486 in most places; however, -mtune can greatly affect performance on Pentiums and later), etc.
There may be many other variables to be taken into account here. And even after that, it can still be hard to explain the difference.