Antergos, Manjaro, CentOS, Debian, Ubuntu, Fedora & OpenSUSE Performance Showdown


  • #11
    So nice to see no foaming-at-the-mouth "Ubuntu is bloated spyware" hatred so far. Hoping for the best.

    Comment


    • #12
      I have Ubuntu, Manjaro, and Fedora installed on my AMD Athlon X3 450 + Nvidia 750 Ti, and Manjaro was the best for my use and in some game benchmarks I used to run.
      I suppose benchmarks between distros are also hardware-related. For Xeons, that is the result, no doubt, but is it the same for other Intel CPUs (Atom, i3, i5, i7) and for gaming on Nvidia GPUs? I also suppose the kernel version and GPU driver versions were the same, in order to compare apples with apples.

      Comment


      • #13
        Antergos and Manjaro, based on Arch Linux, offer many ways to gain better performance. So please, do not interpret this test as a rigorous performance contest. Michael, what about a comparison of carefully optimised distros by skilled, performance-oriented users? Anyway, thank you for your effort. Petr

        Comment


        • #14
          Originally posted by Linuxxx View Post

          Where do You see me talking about performance? I only talked about latency...
          Speaking of which, if the maximum latency of a frame is a lot higher on a distro with a so-called "low-latency" kernel, then users are going to notice these delayed frames, which will negatively impact their perception of the fluidity of a game!

          Also, You're incorrect regarding the use of (soft) real-time kernels:
          SteamOS uses a regular "generic" kernel, whereas both Antergos & Manjaro do in fact use a fully preemptible one! ("uname -a" -> 'PREEMPT' is NOT your friend!)

          And if Michael were to show us a comparison of frame times on both Windows 10 and Linux, then everyone would be able to see that Linux is in fact the better platform to game on! (Well, IF the ports are of similar quality and a generic kernel is used...)
          Confirming. This is from an Arch install that used pacstrap defaults:
          Originally posted by uname -a
          Linux st-sony0 4.2.5-1-ARCH #1 SMP PREEMPT Tue Oct 27 08:13:28 CET 2015 x86_64 GNU/Linux
          I don't agree with your assessment that performance and latency are separate issues. Latency is one of the factors in the overall performance of a system. justmy2cents is right in that consistency and predictability are the goals of the RT / pre-empting work, not performance gains. Reduced latency was just one of the benefits of that work, but it was not the end goal in itself, and as I understand it, the reduced latency benefits of the RT / pre-empting work benefited the kernel as a whole, not just those kernels with the RT / pre-empting patchsets.
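          The "uname -a" check discussed above can be scripted. A minimal sketch, assuming a typical distro layout (the `/boot/config-*` path is distro-dependent and may not exist everywhere):

          ```shell
          # PREEMPT in the kernel version string means CONFIG_PREEMPT=y
          # (a fully preemptible "low-latency" desktop kernel); its absence
          # usually means voluntary or no preemption.
          uname -v | grep -o 'PREEMPT' || echo "no PREEMPT (voluntary/server preemption)"

          # If the kernel config is installed, query the preemption model directly:
          grep -E '^CONFIG_PREEMPT(_VOLUNTARY|_NONE)?=y' "/boot/config-$(uname -r)" 2>/dev/null || true
          ```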

          Comment


          • #15
            I would love to see how openSUSE Leap compares to their current Tumbleweed release.

            Comment




              • #17
                Just curious, but why was 3.18 used in Manjaro when 4.1 is the default starting from 15.09?
                Last edited by Sothis6881; 07 November 2015, 06:55 PM. Reason: Clarification

                Comment


                • #18
                  Originally posted by korrode View Post
                  @Linuxxx

                  Can you tell me why it is that on Fedora I basically can't use Firefox as a browser without the NoScript addon (and even then it still ain't great)? Just a few websites loading/running scripts makes the responsiveness of the system turn to garbage; even the little animated loading circles on the Firefox tabs jolt around and look like they're animated at about 1 frame every 2 seconds. Yet an openSUSE GNOME or Manjaro GNOME installation on the exact same hardware has no such issue.
                  (EDIT: and this is just one of many examples of responsiveness problems I noticed with Fedora)

                  I'll get around to directly testing how much full PREEMPT is affecting this at some point, but I'd be interested to hear your thoughts now anyway.
                  This is certainly interesting, as I haven't noticed such issues with Fedora myself (then again, I only recently started using Fedora as my main desktop [Cinnamon], after I assembled a new PC with an SSD...)

                  Please do share Your results of the PREEMPT testing, since I'm certain that a lot of people would be interested in the outcome as well!

                  Comment


                  • #19
                    Originally posted by Serge View Post

                    Confirming. This is from an Arch install that used pacstrap defaults:


                    I don't agree with your assessment that performance and latency are separate issues. Latency is one of the factors in the overall performance of a system. justmy2cents is right in that consistency and predictability are the goals of the RT / pre-empting work, not performance gains. Reduced latency was just one of the benefits of that work, but it was not the end goal in itself, and as I understand it, the reduced latency benefits of the RT / pre-empting work benefited the kernel as a whole, not just those kernels with the RT / pre-empting patchsets.
                    I understand that consistency & predictability are the goals of real-time kernels, but how come these kernels actually exhibit higher latencies when it comes to frame-latency testing?
                    Shouldn't the predictability of latencies provide more consistency to frame timing? Then how come these spikes are so much more pronounced with real-time kernels?
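                    To illustrate why those spikes matter even when averages look fine: a few outlier frames barely move the mean frame time but dominate the maximum and the high percentiles, which is what players actually perceive as stutter. A minimal sketch with made-up frame times (the numbers are purely illustrative, not measurements from any kernel):

                    ```python
                    import statistics

                    # Hypothetical frame times in milliseconds: two runs with nearly
                    # identical averages, but the second has two large spikes.
                    smooth = [16.7] * 98 + [17.0, 17.2]
                    spiky = [16.5] * 98 + [30.0, 45.0]

                    for name, frames in (("smooth", smooth), ("spiky", spiky)):
                        ordered = sorted(frames)
                        p99 = ordered[int(len(ordered) * 0.99) - 1]  # 99th-percentile frame time
                        print(f"{name}: mean={statistics.mean(frames):.1f} ms, "
                              f"p99={p99:.1f} ms, max={max(frames):.1f} ms")
                    ```

                    The means differ by well under a millisecond, yet the maximum frame time nearly triples; that gap is exactly what per-frame latency testing exposes and a simple FPS average hides.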

                    Comment


                    • #20
                      Originally posted by Linuxxx View Post

                      I understand that consistency & predictability are the goals of real-time kernels, but how come these kernels actually exhibit higher latencies when it comes to frame-latency testing?
                      Shouldn't the predictability of latencies provide more consistency to frame timing? Then how come these spikes are so much more pronounced with real-time kernels?
                      I really don't know. If I had to guess, I'd say it's because pre-empting parts of the kernel that weren't designed to be pre-emptible induces extra overhead. But that's just my dumb guess.
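                      That guess can at least be probed from user space: measuring how far short sleeps overshoot their deadline gives a rough proxy for wakeup-latency jitter. A minimal sketch (results are heavily platform- and load-dependent, so no particular numbers are claimed):

                      ```python
                      import statistics
                      import time

                      # Ask for a 1 ms sleep repeatedly and record how late each wakeup is.
                      # The overshoot reflects timer resolution plus scheduling latency.
                      TARGET_MS = 1.0
                      overshoots = []
                      for _ in range(200):
                          start = time.perf_counter()
                          time.sleep(TARGET_MS / 1000.0)
                          elapsed_ms = (time.perf_counter() - start) * 1000.0
                          overshoots.append(elapsed_ms - TARGET_MS)

                      print(f"mean overshoot: {statistics.mean(overshoots):.3f} ms")
                      print(f"max  overshoot: {max(overshoots):.3f} ms")  # the occasional spike
                      ```

                      Running this on a generic kernel versus a PREEMPT one (same hardware, same load) would be one crude way to compare the jitter the thread is arguing about.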

                      Comment
