
DragonEgg 3.0 Puts GCC & LLVM In One Bed


  • #11
    Originally posted by Michael View Post
    It's been said time and time again why upstream defaults are used... There's a generally safe assumption (although in the case of POVRay I am not sure what reasoning the developers have) that they know how to best tune their own code and have some reasoning to justify their defaults,
    This is bullshit. Many code authors (which is what you mean by upstream here, as I assume you build from official tarballs) ship their code with minimal optimization, or with debugging enabled, because they assume that the people building from source, be they users or distro packagers, will apply their preferred optimization settings.

    So let's look at some of the packages in this 'benchmark':

    PovRay - When compiled for a 64-bit platform, its configure script tunes the build for K8, which means that in this case the winner is whichever compiler does the worst job of tuning the code for a K8 CPU. When compiling and tuning for the CPU the benchmark was actually run on, the results were reversed.
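    As a sketch of what fixing this would look like (the exact variable names accepted by PovRay 3.6's configure script may differ, so treat the flags as illustrative), you override the stale K8 default with tuning for the host CPU:

    ```shell
    # Illustrative: replace PovRay 3.6's hard-wired K8 tuning with flags
    # for the machine the benchmark actually runs on.
    ./configure CXXFLAGS="-O3 -march=native -mtune=native"
    make
    ```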

    x264 - By default it uses hand-optimized assembly for all the performance-critical parts, which means that the compilers have nothing to work on. This can be disabled with a simple './configure --disable-asm', which would then make the results meaningful.
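    Concretely, a compiler comparison on x264 would need something like the following (--disable-asm is a real x264 configure option; the extra optimization flags are my own illustrative choice):

    ```shell
    # Build x264 without its hand-written assembly so that the code being
    # measured is actually what the compiler generated.
    ./configure --disable-asm --extra-cflags="-O3 -march=native"
    make
    ```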

    p7zip - Ships with -O, which translates to -O1. This clearly indicates that the author expects those building from source to apply their own optimization settings, since performance is really poor at -O1, and there's no way distro packagers leave it at that for a program such as this where code performance really matters.
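    You can verify the -O/-O1 equivalence yourself with GCC's optimizer help output (requires a gcc install; this is just a quick sanity check, not part of any build):

    ```shell
    # GCC documents bare -O as equivalent to -O1; compare the optimizer
    # passes each one enables.
    gcc -O  -Q --help=optimizers > /tmp/opt_O.txt
    gcc -O1 -Q --help=optimizers > /tmp/opt_O1.txt
    diff /tmp/opt_O.txt /tmp/opt_O1.txt && echo identical
    ```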

    So here we have tests which benchmark K8-optimized code on a Core i7, hand-written assembly the compiler never touches, and the lowest optimization level available, -O1 (-O0 is NO optimization).

    Again, these tests are totally misleading compared to what users will end up running on their own machines, since they either (MOST likely) get a binary from their distro repo (compiled by a packager who, unlike Michael, most likely knows what he is doing) or build from source themselves, which means they can just look in the README to see how to apply appropriate options.

    Originally posted by Michael View Post
    the tests are meant to be reproducible so that even if you fetch the source package outside of PTS
    If you declare the settings being used then people can reproduce them, and then the tests will actually be of use. We've had tests here that compare binaries tuned for an entirely different architecture than the one they're executed on, tests where there's no performance-critical code left for the compiler to work on (x264 with assembly enabled), and tests where upstream ships with -O0 (which is debug, no optimization) or -O1, which is absolute minimal optimization. These tests are a joke; it doesn't matter that they can be reproduced directly from an upstream tarball when their default configuration makes them totally unsuitable for compiler performance comparisons. I mean seriously, if you take a 7-year-old tarball release like PovRay 3.6.1 and don't actually look at the configure script to verify that its settings make any sense against the compilers and CPU architectures of today, then you really have no interest in meaningful results.

    Originally posted by Michael View Post
    regardless of any PTS-specific changes made someone is always going to bitch about an option, etc etc
    Certainly, but it's not like you are avoiding that now with this mess of a setup; instead you get the bitching AND a test suite which is worthless. LLVM devs say it's bullshit, GCC devs say it's bullshit, users here on Phoronix say it's bullshit.


    • #12
      Originally posted by Smorg View Post
      It's never safe to assume. Upstream does whatever they feel like, including using hacked-together custom makefiles which don't respect your CFLAGS anyway. This is the challenge of packaging for distros.

      In the event that this is the case, it's still generally expected that the packager define -march/-mtune and possibly -On, to be joined with whatever special secret-sauce flags upstream wants (which may not be applied the same way for every source file). It can't be assumed that these are set to something sane by default.
      I would like to elaborate on this. It is usually the case that upstream doesn't target a specific architecture or even a specific compiler. As a consequence, they expect package maintainers to pick sane values for these things. Very few upstream developers consider these settings untouchable, and if they did, they wouldn't provide them for people to modify in the first place.
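      The usual division of labor looks roughly like this (the specific flag values are illustrative; distros each pick their own baseline, e.g. a generic x86-64 target at -O2, while upstream's own extra flags come from its makefiles):

      ```shell
      # Illustrative packager-style build: the packager, not upstream,
      # chooses the architecture baseline and optimization level, and
      # configure/make fold these into whatever flags upstream adds.
      export CFLAGS="-O2 -march=x86-64 -mtune=generic"
      export CXXFLAGS="$CFLAGS"
      ./configure --prefix=/usr
      make
      ```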
      Last edited by Shining Arcanine; 12-06-2011, 03:12 PM.