Clear Linux Continues To Maintain Slight Graphics Lead Over Ubuntu 16.10


  • edwaleni
    replied
    Originally posted by damg:

    From the compiler flags, it seems they are using different BLAS library implementations. OpenBLAS appears to be much faster than CBLAS in this case.
    Good catch. OpenBLAS is supposed to match or exceed Intel MKL performance in SIMD-based ops. If Intel is using that library here, that says something for OpenBLAS. Of course, while Intel MKL is free to use, it is not open source.
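
    For reference, Caffe selects its BLAS backend at build time in Makefile.config, so a distribution build can swap implementations without code changes. A minimal sketch using upstream Caffe's option names (which backend each distro's PTS build actually picks up is an assumption here):

        # Makefile.config (upstream Caffe)
        # BLAS := atlas   # default: ATLAS/CBLAS
        BLAS := open      # OpenBLAS
        # BLAS := mkl     # Intel MKL (free to use, but closed source)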



  • damg
    replied
    Originally posted by edwaleni:

    I noticed that too. I know Intel has been trying to expose more functionality in their instruction set to improve its use in analytics, but I didn't think it was that efficient compared to bone-stock Ubuntu. I am assuming that it is taking some of those parallel tasks and running them across the HD Graphics 530. Caffe requires a flag to be set to run on a GPU. Perhaps the flag is set to "on" in PTS, and Clear Linux exposes the integrated GPU correctly whereas Ubuntu doesn't.
    From the compiler flags, it seems they are using different BLAS library implementations. OpenBLAS appears to be much faster than CBLAS in this case.



  • edwaleni
    replied
    Originally posted by chrisb:
    What's up with the Caffe result? Is Clear Linux using the GPU?
    I noticed that too. I know Intel has been trying to expose more functionality in their instruction set to improve its use in analytics, but I didn't think it was that efficient compared to bone-stock Ubuntu. I am assuming that it is taking some of those parallel tasks and running them across the HD Graphics 530. Caffe requires a flag to be set to run on a GPU. Perhaps the flag is set to "on" in PTS, and Clear Linux exposes the integrated GPU correctly whereas Ubuntu doesn't.
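
    For what it's worth, the GPU/CPU switch in Caffe's benchmark mode is an explicit command-line flag; with no flag it runs on the CPU. A sketch using upstream Caffe's "caffe time" tool (the model path is illustrative, and note that mainline Caffe's GPU path targets CUDA, so an Intel iGPU would need the OpenCL branch):

        # CPU run (the default when no GPU is specified)
        caffe time --model=models/bvlc_alexnet/deploy.prototxt --iterations=50
        # GPU run on device 0
        caffe time --model=models/bvlc_alexnet/deploy.prototxt --iterations=50 --gpu=0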



  • Linuxhippy
    replied
    Does PGO/LTO matter as an intermediate step if, in the long term, it is going to converge into some kind of a cool C/C++ JIT compiler? Are there any indications things are developing this way?
    JITs with integrated profiling are great for achieving peak performance in single applications; for system libraries and services I doubt they are a good idea - per-application profiles would mean a lot less code sharing, plus latency issues. This is also something that hurts Java quite a bit: each Java process keeps the compilation results of the same library classes in its private memory space, because the generated code depends on application-specific behaviour. Great for performance, not so great for memory consumption. Just imagine if every native Linux application copied libc, libstdc++, glib, zlib, libxml, ... into its private memory space.

    If you care about performance, why do you care about those distros?
    Because I prefer mainstream distributions. However, there is nothing preventing mainstream distributions from investing a bit in performance.
    Actually, with Red Hat depending business-wise on RHEL, I can't see why they are not using PGO/LTO to differentiate themselves from all the available clones and other enterprise distributions - on average, 10% more performance is not uninteresting when you pay per CPU-hour in the cloud.
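
    To make that concrete, the classic GCC PGO+LTO workflow is a three-step build (standard GCC flags; the program and workload names are illustrative):

        # 1. Build an instrumented binary
        gcc -O2 -flto -fprofile-generate -o app app.c
        # 2. Run a representative workload; this writes *.gcda profile data
        ./app --typical-workload
        # 3. Rebuild, letting GCC optimize hot paths based on the profile
        gcc -O2 -flto -fprofile-use -o app app.c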

    Br



  • Imerion
    replied
    How come Xfce gets lower results in some of the tests?



  • Jabberwocky
    replied
    The initial post about Clear Linux left me very skeptical, but this time around the review/benchmark caught my attention. I still don't find the name very suitable, e.g. "We have a clear advantage in the Caffe AlexNet test."

    PS: Thanks for testing with similar Mesa stacks, Michael.



  • vktgz
    replied
    Clear uses CPUFreq over P-State
    So the Intel distro does not use the Intel scaling driver? Strange...

    Ubuntu 16.10 makes use of ... xf86-video-modesetting
    And Clear uses what - xf86-video-intel, I assume? So is this a comparison of different video drivers, or does the Intel distro also not use the Intel video driver?
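
    Both are easy to verify on a running system (standard sysfs and Xorg log locations):

        # Which CPU frequency scaling driver is active?
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
        #  -> "intel_pstate" on a stock Ubuntu install, or e.g. "acpi-cpufreq"
        #     when intel_pstate is disabled (kernel parameter intel_pstate=disable)

        # Which X DDX driver did the server load?
        grep -iE 'modeset|intel' /var/log/Xorg.0.log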



  • Dreakon
    replied
    It may be worth looking into Solus. The project founder of Solus also works on Clear Linux and has implemented many of the same optimizations in Solus. On top of that, they've also worked hard at optimizing all of the Steam runtimes to improve game performance as well. Could make for some interesting benchmarks.



  • caligula
    replied
    Originally posted by Linuxhippy:
    Modern compiler optimizations like profile-guided optimization (PGO) and link-time optimization (LTO) do make a real difference and actually pay off - Firefox built with PGO loads web pages 10-20% faster. Yes, it is a lot of work, but it is really worth it - and it is the reason why the official Firefox builds perform so much better than, e.g., the Firefox build provided by Fedora, despite using an outdated compiler (GCC 4.8.5).

    I wonder how much more proof it takes until mainstream distributions like Fedora and Ubuntu choose to build at least their low-level system packages (X.Org, Wayland, Mesa, glib, glibc, Qt, GTK, cairo, FreeType, libxml, ...) with PGO/LTO instead of building everything with -O2 -fno-strict-aliasing. With SSE2 included in the amd64 instruction set by default, there is so much a compiler can do if it is given additional information about what the code will actually do.
    If you care about performance, why do you care about those distros?
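
    For the Firefox case specifically, PGO is a build-time switch in the mozconfig rather than a code change; a sketch per Mozilla's build system (the exact option spelling has varied across releases, so treat this as an assumption):

        # .mozconfig
        ac_add_options MOZ_PGO=1

        # then build as usual
        ./mach build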

