Mac OS X 10.5 vs. Ubuntu 8.10 Benchmarks


  • #31
    Originally posted by deanjo View Post
    With the modular nature of linux this is of no concern. It loads what it needs to support the hardware found.
    Thanks for clearing that up for me; I take it that applies to all the device drivers, kernel modules and the like.

    However, it doesn't apply to design-phase and compile-time optimisations. For example, Ubuntu is compiled to support many old architectures back to (I think) the original Pentium and therefore cannot use a lot of optimisations that could be applied to a single-architecture system. Even if some (like SSE, SSE2, etc.) can be detected and used at runtime, there is still the dead weight of the legacy support code in there taking up memory space. Some distros build for i386; I think Ubuntu doesn't go that far back, only to i586.

    This is why a well-tuned Gentoo system SHOULD run faster, load quicker and be lighter on memory usage than a system that is compiled to make sure it will work on every 32-bit x86 architecture you throw at it.

    I guess the x86-64 build SHOULD dodge most of this legacy junk, if done properly?
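    (If you want to see roughly what that baseline costs, and assuming a reasonably recent GCC is installed, you can compare the target features GCC enables for a generic i586 build against what it enables for your own CPU:

    $ gcc -Q --help=target -march=i586 2>/dev/null | grep -E 'march|msse'
    $ gcc -Q --help=target -march=native 2>/dev/null | grep -E 'march|msse'

    The i586 run should show the SSE family disabled, while -march=native should light up whatever your processor actually supports.)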



    • #32
      Originally posted by Prosthetic Head View Post
      Thanks for clearing that up for me; I take it that applies to all the device drivers, kernel modules and the like.

      However, it doesn't apply to design-phase and compile-time optimisations. For example, Ubuntu is compiled to support many old architectures back to (I think) the original Pentium and therefore cannot use a lot of optimisations that could be applied to a single-architecture system. Even if some (like SSE, SSE2, etc.) can be detected and used at runtime, there is still the dead weight of the legacy support code in there taking up memory space. Some distros build for i386; I think Ubuntu doesn't go that far back, only to i586.

      This is why a well-tuned Gentoo system SHOULD run faster, load quicker and be lighter on memory usage than a system that is compiled to make sure it will work on every 32-bit x86 architecture you throw at it.

      I guess the x86-64 build SHOULD dodge most of this legacy junk, if done properly?
      Typically, GCC will read the capabilities of the processor and apply any features it finds in the host processor to the compilation. The same thing is done on OS X (Xcode is GCC with additional libraries for OS X). PTS does not use the precompiled libraries for its tests; it compiles them from scratch.
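      (That auto-detection happens when the build passes -march=native; I'm assuming here that this is what the PTS builds do, since the post doesn't say which flags are used. You can see exactly what GCC picks up on your host with:

      $ gcc -march=native -E -v - </dev/null 2>&1 | grep cc1

      which prints the real cc1 invocation with -march and the various -msse/-mmmx flags expanded for your CPU.)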



      • #33
        OK, the Phoronix executive editor said (on page 2 of this thread, I think):
        Each OS was left in their stock mode, which is why JFS or any other FS wasn't used.
        In 'stock mode' Ubuntu is compiled for the lowest common denominator (i.e. i586, the original Pentium). Ubuntu is distributed as precompiled binaries, so it is very much non-stock to rebuild it with different architecture-specific optimisations. The test programs may have been compiled specifically on each system, but they rely on libraries, kernel modules and the core of the kernel itself in order to work.
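        (To make that dependency concrete, gzip here is just a stand-in for any of the test binaries; the PTS-built programs link the same way:

        $ ldd $(which gzip)

        Everything in that list of shared libraries comes from Ubuntu's stock, lowest-common-denominator builds, regardless of how the benchmark itself was compiled.)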

        I don't say this makes much difference to the outcome of the tests, just that it's something inherently to the advantage of a restricted OS/hardware partnership.



        • #34
          Whatever.

          I have personal experience with HFS+ and it _sucks_.

          Really. It's utter shit. You don't want it. Out of OS X, Linux, BSD, Windows, whatever... HFS+ is the crustiest and most backward file system out there. It is a FAT32-generation file system with a BSD VFS layered on top of it to create the illusion of semi-POSIX compatibility and journalling... both of which come with a big hit in performance.

          I can't really stress this enough. It's a slow, old-fashioned FS that is much more likely to corrupt your data than Linux's ext3.

          -------------------------------

          What is more likely happening is that Mac OS is simply lying about file system operations. For example, with Firefox one of the big performance misfeatures on Linux was that the application was calling fsync() a hundred times a second. (The Firefox folks are stupidly trying to make their SQLite database ACID-compliant, or some bizarre thing like that.)

          With most operating systems, like Windows or OS X, the OS just ignores that sort of thing and lies to the application that it has synced to the hard drive, in order to improve the perception of performance. Linux takes it much more seriously and actually does the syncing, causing the hard drive to thrash around and freezing any application waiting on I/O.
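          A crude way to feel the difference yourself (just a sketch with made-up file names and sizes, not a proper benchmark) is to compare a write that is allowed to sit in the page cache with one that is forced out to disk before dd exits:

          $ time dd if=/dev/zero of=testfile bs=4k count=25600
          $ time dd if=/dev/zero of=testfile bs=4k count=25600 conv=fsync

          The first ~100MB write can come back almost instantly because the kernel just buffered it; conv=fsync makes dd call fsync() at the end, so the second number includes the time the data actually takes to hit the platters.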

          So you have to take any file system benchmark with a huge grain of salt.

          For example, unless that Bonnie++ benchmark was done correctly, it's completely and 100% misleading.

          The deal is that in order for it to accurately measure the performance of the file system, you have to drive the files out of cache and make sure they are actually written to the drive. Otherwise you're just judging memory I/O, and Bonnie++ is not designed to do that accurately. And since OS X lies about things like 'sync' (thus making it much likelier your data will be corrupted during a power outage or system crash), you can't trust anything the benchmark says otherwise.


          This system has 1GB of RAM. When running benchmarks I expect most of it will end up being used as cache, so you'll have about 700MB or more of file system cache to deal with.

          So with Bonnie++ you'll have to make sure the benchmark uses files at least 1.5x the amount of memory. Then you'll get a much more accurate picture of disk and file system performance.
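          On this 1GB box that would be something like the following (the directory and user are placeholders; -s is the total size of the test files in MB and -r tells Bonnie++ how much RAM the machine has):

          $ bonnie++ -d /mnt/test -s 1536 -r 1024 -u nobody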

          In other words... without actually seeing the settings used in the benchmarks, the benchmarks are WORTHLESS and MISLEADING. No doubt about it. Disk and file system benchmarks are insanely hard to get right, and while it's possible to draw conclusions from Bonnie++, it's going to require much more than a simple graph. We need the settings and the rest of the data output.

          -------------------------------------------

          Take a look at the Gzip performance benchmark. You're dealing with a 2GB file, so cache performance isn't going to enter into it much. With that, Linux trounces OS X. The CPU is more than fast enough, even on these low-end boxes, to make gzipping a 2GB file an I/O-bound operation: the CPU can compress much faster than the disk can read and write.

          Get it?

          --------------------------------

          Here is an example of what I am talking about:

          I am running Debian Unstable on a Core 2 Duo laptop. I have 2GB of RAM, a 150GB hard drive, and a T7300 2.0GHz dual-core CPU. The GNOME environment, with a couple of terminals and a web browser, actively uses about 256-300MB of RAM. That leaves about 1.7GB of RAM for application and file system cache.

          I have a 701MB AVI file that I am compressing with Gzip. The AVI is already heavily compressed with media-optimised compression (aka MPEG-4), so there is no way in hell Gzip is going to improve on that. It'll just thrash around, using up as much CPU as possible; a near worst-case scenario for this sort of application.


          First run is:
          $ time gzip -c Star\ Trek\ 10\ -\ Nemesis.avi > /dev/null

          real 1m51.576s
          user 0m52.631s
          sys 0m0.532s

          Second run is:
          $ time gzip -c Star\ Trek\ 10\ -\ Nemesis.avi > /dev/null

          real 0m52.124s
          user 0m51.323s
          sys 0m0.332s



          A minute of difference: a 100% improvement in performance between two runs. The first run had to read the file from disk. The second time the file was already sitting in the FS cache, so the disk was no longer the bottleneck; the second run was CPU-bound and effectively a CPU benchmark.
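          If you want repeatable cold-cache numbers instead, on any reasonably recent Linux kernel you can flush the page cache between runs (needs root; same file as above):

          $ sync
          # echo 3 > /proc/sys/vm/drop_caches
          $ time gzip -c Star\ Trek\ 10\ -\ Nemesis.avi > /dev/null

          With the cache dropped, every run should look like the first, disk-bound one.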

          --------------------------------


          So it's very important to include the settings used for the benchmarks, especially for the FS, because with FS benchmarks the numbers completely change their meaning depending on your settings.



          • #35
            Thanks for that explanation, it makes a lot of sense.



            • #36
              ReiserFS anyone?

              OK, why not try benchmarking Ubuntu on ReiserFS partitions?
              I think that would have made a difference to the results...



              • #37
                Originally posted by Prosthetic Head View Post
                Just one point and I don't know how important it is...
                OS X and the Mac mini are specifically designed for each other, whereas Ubuntu has to support as much hardware as possible out of the box. So any general OS that comes with drivers built in will have to carry a lot of 'dead weight' and avoid over-optimisation, whereas an OS for very few hardware platforms can be highly optimised and shed all unnecessary components.
                The Linux kernel itself outperforms the others, so it must be something else: kernel configuration, problems with GCC or libraries (like the well-known performance degradation in MySQL with some library), different application configs, and slower Intel drivers.

                P.S.

                Ubuntu loads tons of unnecessary drivers.
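                If you want to check that on a stock install yourself (a quick look, not proof that it costs anything at runtime), list the loaded kernel modules:

                $ lsmod | wc -l                 # roughly how many modules are loaded (includes the header line)
                $ lsmod | sort -n -k2 | tail    # the largest ones by memory footprint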



                • #38
                  Originally posted by MaestroMaus View Post
                  Warning; hater alert!
                  An Ubuntu hater? Me? I used to love that distro in early 2007, but since 7.10 it's started to suck a lot. I did read the whole article, and it shows performance below my expectations. Losing to a Macintosh is perhaps one of the worst things that can happen to Ubuntu, IMO. I switched to Arch Linux and, in my personal experience, it's about 300% faster than Ubuntu.



                  • #39
                    Ouch

                    Looks like Ubuntu got a serious beat-down. Very interesting comparison (or not?). But even if it had come out on top, this still fails to answer the real question: has Ubuntu become slower over time?



                    • #40
                      Originally posted by D0M1N8R View Post
                      Looks like Ubuntu got a serious beat-down. Very interesting comparison (or not?). But even if it had come out on top, this still fails to answer the real question: has Ubuntu become slower over time?
                      Just read Drag's post... And no, it's not getting slower.

