
Red Hat Enterprise Linux 6.0 Beta 2 Benchmarks


  • #11
    Again, the data presented in this article would be misleading to anyone using it to decide which enterprise OS to run. The debugging code alone is reason enough not to compare them.

    The only real benchmark is running the application itself (the one you will be using in a production environment) on both OSes, properly updated and configured. If there are still differences in performance that are not caused by upstream, then and only then can you say "distro X performs better than distro Y".
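    A minimal sketch of that approach, assuming the workload can be driven from a script; run_job.sh and the run count are hypothetical stand-ins for your real application:

        # Time the real workload several times on each distro; the first run
        # warms caches, so it is skipped when averaging.
        for i in 1 2 3 4 5; do
            /usr/bin/time -f "%e" -o runtimes.txt -a ./run_job.sh
        done
        tail -n +2 runtimes.txt | awk '{ sum += $1 } END { printf "avg: %.2f s\n", sum/NR }'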



    • #12
      ffmpeg: Ubuntu 1% faster than openSUSE and the winner.
      7-zip: SUSE 2% faster than Ubuntu and 'virtually the same'.

      The bias is very hard to miss.



      • #13
        I really wish Michael would either post the PTS results on the tracker or give details on the package selection. openSUSE by default installs the desktop kernel instead of the more server-oriented -default kernel.
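        For anyone who wants to check which flavour they got, a quick sketch (openSUSE packages its kernel flavours as kernel-desktop, kernel-default, etc.):

            uname -r                             # running kernel; the suffix shows the flavour
            rpm -qa 'kernel-*'                   # kernel packages actually installed
            sudo zypper install kernel-default   # switch to the server-oriented flavour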



        • #14
          ffmpeg: Ubuntu 1% faster than openSUSE and the winner.
          7-zip: SUSE 2% faster than Ubuntu and 'virtually the same'.

          The bias is very hard to miss.
          Either that, or Michael wrote it that way because the difference between the best and worst cases in the 7-zip test was 4%, while it was 13% in the ffmpeg benchmark.
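          For reference, the spread being compared is just (worst - best) / best; a throwaway sketch with illustrative numbers, not the article's actual timings:

              # 4% spread (7-zip-like) vs 13% spread (ffmpeg-like), in percent.
              awk -v best=100 -v worst=104 'BEGIN { printf "%.1f%%\n", (worst-best)/best*100 }'
              awk -v best=100 -v worst=113 'BEGIN { printf "%.1f%%\n", (worst-best)/best*100 }'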



          • #15
            Well, it happens in almost every article. If another distro is faster than Ubuntu, you have a good chance of finding a 'virtually the same'. But with Ubuntu leading...



            • #16
              Or it could be that people not using Ubuntu see such phrasing more often than the rest of us. I think we need some real statistics to figure this out.



              • #17
                With the EXT4 issues and all possibly skewing the results, I'd really love to see a comparison where all the reading/writing is NOT done to the local disk, i.e. using NFS mounts for all your data and work.

                The reason being that, in general (for non-home users), the local disk of a Linux/Unix server/workstation is usually just there for the OS, tmp, and the software bundled with it. All the real data and custom software/apps are stored on some type of NAS (even if that NAS is just another Linux/Unix machine).

                I'd love to see a benchmark of how the different OSes deal with reading/writing NFS-mounted data.
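                Something like the following would do as a starting point; the server name, export path, and workload here are hypothetical:

                    # Put the benchmark's working directory on an NFS mount so all
                    # reads/writes go over the wire instead of to the local disk.
                    sudo mount -t nfs nas01:/export/bench /mnt/bench -o rw,hard,intr
                    cd /mnt/bench
                    /usr/bin/time -f "%e s" tar xf /tmp/linux-2.6.35.tar   # any I/O-heavy job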



                • #18
                  Originally posted by matobinder View Post
                  With the EXT4 issues and all possibly skewing the results, I'd really love to see a comparison where all the reading/writing is NOT done to the local disk, i.e. using NFS mounts for all your data and work.

                  The reason being that, in general (for non-home users), the local disk of a Linux/Unix server/workstation is usually just there for the OS, tmp, and the software bundled with it. All the real data and custom software/apps are stored on some type of NAS (even if that NAS is just another Linux/Unix machine).

                  I'd love to see a benchmark of how the different OSes deal with reading/writing NFS-mounted data.
                  And what filesystem do you use as the basis for the NFS export on the server?

                  How about using tmpfs for the tests?

                  I mean, if we are marching forward into the realm of 'no home user is doing that', it should be done right.
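                  A tmpfs-backed export would look something like this (size and export path are arbitrary), taking the server-side filesystem out of the comparison entirely:

                      # RAM-backed scratch space; no on-disk filesystem involved.
                      sudo mount -t tmpfs -o size=4g tmpfs /srv/nfs/bench
                      # hypothetical /etc/exports entry:
                      #   /srv/nfs/bench  *(rw,sync,no_root_squash)
                      sudo exportfs -ra    # re-export after editing /etc/exports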



                  • #19
                    Originally posted by energyman View Post
                    And what filesystem do you use as the basis for the NFS export on the server?

                    How about using tmpfs for the tests?

                    I mean, if we are marching forward into the realm of 'no home user is doing that', it should be done right.
                    I guess I wouldn't care so much about what filesystem is used on the NFS server side. I've just come across so many cases in the past where one Linux release performs very differently on NFS-mounted systems.
                    I deal with a variety of machines, mostly RHEL 5.x boxes, and we've found kernel patches and things that really change NFS performance: not just tunables, but bug fixes and so on. For example, keeping the NFS mount options the same, we saw a big NFS performance increase going from 5.2 to 5.4. Our initial migration from RHEL 3.8 to 5.2 was a HUGE slowdown in NFS. Whatever changed between 5.2 and 5.4 fixed some of that. I'm not an IS guy these days anymore, so I didn't pay attention to all the fixes that were made, but some of the kernel patches were directly related to NFS performance.

                    I guess what I would really be looking for is, out of the box, without tweaking, how the different distros perform for NFS-related things. Maybe the only "tuning" would be to make sure you use the recommended mount options that the NAS vendor wants.
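                    That "out of the box plus vendor mount options" setup is easy to keep identical across distros; the options below are typical vendor recommendations of that era, not anything official:

                        # Mount exactly as the NAS vendor recommends, the same on every
                        # distro, then run the identical workload with no other tuning.
                        sudo mount -t nfs nas01:/vol/data /mnt/data \
                             -o rsize=32768,wsize=32768,hard,intr,tcp,vers=3
                        nfsstat -m    # show the options the kernel actually negotiated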
