Mac OS X 10.5 vs. Ubuntu 8.10 Benchmarks


  • #31
    Originally posted by deanjo View Post
    With the modular nature of Linux this is of no concern; it loads what it needs to support the hardware found.
    Thanks for clearing that up for me; I take it that applies to all the device drivers, kernel modules, and the like.

    However, it doesn't apply to design-phase and compile-time optimisations. For example, Ubuntu is compiled to support many old architectures back to (I think) the original Pentium, and therefore cannot use a lot of optimisations that could be applied to a single-architecture system. Even where features (like SSE, SSE2, etc.) can be detected and used at runtime, there is still the dead weight of the legacy support code taking up memory. Some distros build for i386; I think Ubuntu doesn't go that far back, only to i586.

    This is why a well-tuned Gentoo system SHOULD run faster, load quicker, and be lighter on memory usage than a system compiled to work on every 32-bit x86 architecture you throw at it.

    I guess the x86-64 build SHOULD dodge most of this legacy junk, if done properly?

    Comment


    • #32
      Originally posted by Prosthetic Head View Post
      Thanks for clearing that up for me; I take it that applies to all the device drivers, kernel modules, and the like.

      However, it doesn't apply to design-phase and compile-time optimisations. For example, Ubuntu is compiled to support many old architectures back to (I think) the original Pentium, and therefore cannot use a lot of optimisations that could be applied to a single-architecture system. Even where features (like SSE, SSE2, etc.) can be detected and used at runtime, there is still the dead weight of the legacy support code taking up memory. Some distros build for i386; I think Ubuntu doesn't go that far back, only to i586.

      This is why a well-tuned Gentoo system SHOULD run faster, load quicker, and be lighter on memory usage than a system compiled to work on every 32-bit x86 architecture you throw at it.

      I guess the x86-64 build SHOULD dodge most of this legacy junk, if done properly?
      Typically, GCC will read the capabilities of the processor and apply any features it finds in the host processor to the compilation. The same thing is done on OS X (Xcode is GCC with additional libraries for OS X). PTS does not use the precompiled libraries for its tests; it compiles them from scratch.
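      A quick way to see what a host-tuned build would actually target is to ask GCC what "native" resolves to on the build machine. A minimal sketch, assuming a Linux host with a reasonably recent GCC (-march=native and -Q --help=target are standard GCC options):

```shell
# Show the CPU features the kernel reports (SSE, SSE2, etc.):
grep -m1 '^flags' /proc/cpuinfo

# Ask GCC which -march/-mtune it resolves "native" to on this machine,
# i.e. what a host-tuned (Gentoo-style) build would compile for:
gcc -march=native -Q --help=target | grep -E 'march|mtune'
```

      A generic distro binary instead fixes -march at a lowest-common-denominator baseline, which is the difference being discussed above.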

      Comment


      • #33
        OK, the Phoronix executive editor said (on page 2 of this thread, I think):
        Each OS was left in their stock mode, which is why JFS or any other FS wasn't used.
        In 'stock mode' Ubuntu is compiled for the lowest common denominator (i.e. i586, the original Pentium). Ubuntu is distributed as precompiled binaries, so recompiling it with architecture-specific optimisations would be very much non-stock. The test programs may have been compiled specifically on each system, but they rely on libraries, kernel modules, and the core of the kernel itself in order to work.

        I don't say this makes much difference to the outcome of the tests, just that it's something inherently to the advantage of a restricted OS-hardware partnership.

        Comment


        • #34
          Whatever.

          I have personal experience with HFS+ and it _sucks_.

          Really. It's utter shit. You don't want it. Of OS X, Linux, BSD, Windows, whatever... HFS+ is the crustiest and most backward file system out there. It is a FAT32-generation file system with a BSD VFS layered on top to create the illusion of semi-POSIX compatibility and journalling; both come with a big hit in performance.

          I can't really stress this enough. It's a slow, old-fashioned FS that is much more likely to corrupt your data than Linux's ext3.

          -------------------------------

          What is more likely happening is that Mac OS is simply lying about file system operations. For example, with Firefox one of the big performance misfeatures on Linux was that the application called fsync() a hundred times a second. (The Firefox folks are stupidly trying to make their SQLite database ACID-compliant, or some bizarre thing like that.)

          With most operating systems, like Windows or OS X, the OS just ignores that sort of thing and lies to the application that it has synced to the hard drive, in order to improve the perception of performance. Linux takes it much more seriously and actually does the syncing, causing the hard drive to thrash around and freezing any application waiting on I/O.
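          The cost of an honoured sync is easy to see with GNU dd, whose conv=fsync option forces an fsync before dd exits. A minimal sketch; the file names and sizes here are arbitrary:

```shell
# Compare a purely buffered write with one that forces an fsync.
# On a system that honours fsync, the second run is noticeably slower;
# on one that "lies" about syncing, the two timings look similar.
time dd if=/dev/zero of=buffered.tmp bs=1M count=64 2>/dev/null
time dd if=/dev/zero of=synced.tmp bs=1M count=64 conv=fsync 2>/dev/null
rm -f buffered.tmp synced.tmp
```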

          So you have to take any file system benchmark with a huge grain of salt.

          For example, unless that Bonnie++ benchmark was done correctly, it's completely and 100% misleading.

          The deal is that in order for it to accurately measure the performance of the file system, you have to drive the files out of cache and make sure they are actually written to the drive. Otherwise you're just judging memory I/O, and Bonnie++ is not designed to do that accurately. And since OS X lies about things like 'sync' (thus making it much likelier your data will be corrupted during any power outage or system crash), you can't trust anything the benchmark says otherwise.


          This system has 1 GB of RAM. When running benchmarks I expect most of it to be given over to caching, so you'll have about 700 MB or more of file system cache to deal with.

          So with Bonnie++ you'll have to make sure the benchmarks use files that are 1.5x the amount of memory. Then you'll get a much more accurate picture of disk and file system performance.
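          On this 1 GB box that rule of thumb works out as follows. A sketch, assuming Bonnie++'s usual flags (-s is the test file size in MB, -r the machine's RAM size in MB, per its man page); the target directory is a placeholder:

```shell
# Size the Bonnie++ test file to 1.5x RAM so the run measures the disk,
# not the page cache. The directory is an example, not a real mount point.
RAM_MB=1024                  # the benchmark machine has 1 GB of RAM
SIZE_MB=$((RAM_MB * 3 / 2))  # 1.5x RAM = 1536 MB
echo "bonnie++ -d /mnt/test -s ${SIZE_MB} -r ${RAM_MB}"
```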

          In other words, without actually seeing the settings used, the benchmarks are WORTHLESS and MISLEADING. No doubt about it. Disk and file system benchmarks are insanely hard to get right, and while it's possible to draw conclusions from Bonnie++, it's going to require much more than a simple graph. We need the settings and other data output.

          -------------------------------------------

          Take a look at the Gzip performance benchmark. You're dealing with a 2 GB file, so cache performance isn't going to enter into it much, and there Linux trounces OS X. The CPU is more than fast enough, even on these low-end boxes, to make gzipping a 2 GB file an I/O-bound operation: the CPU can compress much faster than the disk can read and write.

          Get it?

          --------------------------------

          Here is an example of what I am talking about:

          I am running Debian Unstable on a Core 2 Duo laptop: 2 GB of RAM, a 150 GB hard drive, and a T7300 2.0 GHz dual-core CPU. The GNOME environment, with a couple of terminals and a web browser, actively uses about 256-300 MB of RAM. This leaves about 1.7 GB of RAM for applications and file system cache.

          I have a 701 MB AVI file that I am compressing with Gzip. AVI is already heavily compressed with media-optimised compression technology (i.e. MPEG-4), so there is no way in hell Gzip is going to improve on that. It will just thrash around using as much CPU as possible, which makes it a mostly worst-case scenario for this sort of application.


          First run is:
          $ time gzip -c Star\ Trek\ 10\ -\ Nemesis.avi > /dev/null

          real 1m51.576s
          user 0m52.631s
          sys 0m0.532s

          Second run is:
          $ time gzip -c Star\ Trek\ 10\ -\ Nemesis.avi > /dev/null

          real 0m52.124s
          user 0m51.323s
          sys 0m0.332s



          A minute of difference: a 100% improvement in performance between just two runs. The first run had to read the file from disk. The second time the file was already in FS cache, so the disk was no longer the bottleneck; that run was CPU-bound, so it was effectively a CPU benchmark.
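          To make the first-run (disk-bound) number repeatable, the file can be evicted from the page cache between runs. A sketch, assuming a Linux 2.6.16+ kernel and root access; the file name is the one from the runs above:

```shell
# Drop the page cache so the next run reads from disk again, not RAM.
sync                                                  # flush dirty pages first
echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null # requires root
time gzip -c Star\ Trek\ 10\ -\ Nemesis.avi > /dev/null  # cold-cache timing
```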

          --------------------------------


          So it's very important to include the settings for the benchmarks, especially for the FS, because with FS benchmarks the numbers completely change their meaning depending on your settings.

          Comment


          • #35
            Thanks for that explanation, it makes a lot of sense.

            Comment


            • #36
              ReiserFS anyone?

              OK, why not try benchmarking Ubuntu running on ReiserFS partitions?
              I think that would have made a difference in the results...

              Comment


              • #37
                Originally posted by Prosthetic Head View Post
                Just one point and I don't know how important it is...
                OS X and the Mac mini are specifically designed for each other, whereas Ubuntu has to support as much hardware as possible out of the box. So any general OS that comes with drivers built in will have to carry a lot of 'dead weight' and avoid over-optimisation, whereas an OS for very few hardware platforms can be highly optimised and shed all unnecessary components.
                The Linux kernel itself outperforms the others, so it must be something else: kernel configuration, problems with GCC or with libraries (like the well-known MySQL performance degradation with a certain library), different application configs, and slower Intel drivers.

                P.S.

                Ubuntu loads tons of unnecessary drivers.

                Comment


                • #38
                  Originally posted by MaestroMaus View Post
                  Warning; hater alert!
                  An Ubuntu hater? Me? I used to love that distro in early 2007, but since 7.10 it has been starting to suck a lot. I did read the whole article, and it shows performance below my expectations. Losing to a Macintosh is perhaps one of the worst things that can happen to Ubuntu, IMO. I switched to Arch Linux, and in my personal experience it's 300% faster than Ubuntu.

                  Comment


                  • #39
                    Ouch

                    Looks like Ubuntu got a serious beat-down. Very interesting comparison (or not?). But even if it had come out on top, this still fails to answer the real question: has Ubuntu become slower over time?

                    Comment


                    • #40
                      Originally posted by D0M1N8R View Post
                      Looks like Ubuntu got a serious beat-down. Very interesting comparison (or not?). But even if it had come out on top, this still fails to answer the real question: has Ubuntu become slower over time?
                      Just read Drag's post... And no, it's not getting slower.

                      Comment


                      • #41
                        Ya.

                        Without more information it's hard to tell exactly what is going on with these benchmarks.

                        It's pretty plain that the Linux OSS drivers are getting trounced by OS X. This is not surprising given the immaturity of the platform. (And X.org is an old platform; it has just been very slow to mature, I guess. Let's hope it's a late bloomer.)

                        The way I see it, for a long time the developers were just trying to get the stupid thing to run stably. After all, we have no fewer than three graphics drivers operating on a single piece of hardware at any one time: your VGA or framebuffer console driver, your 2D DDX driver, and then your DRI/DRM driver.

                        No fewer than three different projects with different approaches and ideologies working together: the Linux kernel developers (DRM/VGA/framebuffer), the X.org drivers twiddling bits around on the PCI bus (the DDX drivers), and then the DRI drivers from the Mesa and X.org folks.


                        Just getting it to run was a challenge.

                        Hopefully with the modernization of the driver model and improvements to the X server we can start to see them concentrating more and more on performance.

                        Especially when they get the attention of the Linux kernel developers, which are all about performance, things should start to shape up.

                        For example:
                        http://lwn.net/Articles/305919/

                        It may be locked for now unless you're a subscriber, but basically it's about taking a memory subsystem designed to let high memory (above 1 GB) be used efficiently on 32-bit systems and applying it to graphics memory management.

                        It led to an 18x improvement in Quake 3 performance, and glxgears went from 85 FPS to 360 FPS.

                        Of course, that was with the development memory-managed driver and not the ones anybody is using now (the ones in production are better optimised).

                        -----------------------


                        As for the rest of the benchmarks in this article, it's very difficult to draw solid conclusions from them.

                        In all these cases they are using the same code compiled with the same compiler on both OS X and Ubuntu. So while interesting, it would be more interesting to investigate and determine exactly why the benchmarks get the results they do.

                        So it's important that readers be given settings and details so that they can see for themselves and accurately recreate what is being shown.

                        I suppose I am missing where this is laid out, so if anybody can help me I would be very grateful.

                        For example, it could be a performance bug in Ubuntu. Maybe a compiler mis-setting or a kernel bug is limiting performance. Maybe it's something easy to fix that could lead to a vast improvement in performance.

                        Or it could be that the benchmarks are not being used properly, and readers could point out improvements or changes that would help Phoronix improve how it reports things and the benchmark suites it uses.

                        I donno.

                        Comment


                        • #42
                          Some portals are republishing this really unfair benchmark... It's sad that Phoronix didn't do anything to stop it misleading people. Nothing remains but to inform those portals that Phoronix benchmarks can't be taken seriously.

                          Comment


                          • #43
                            Originally posted by kraftman View Post
                            Some portals are republishing this really unfair benchmark... It's sad that Phoronix didn't do anything to stop it misleading people. Nothing remains but to inform those portals that Phoronix benchmarks can't be taken seriously.
                            It's not unfair... It's showing the performance of two operating systems in their stock configurations.
                            Michael Larabel
                            http://www.michaellarabel.com/

                            Comment


                            • #44
                              Originally posted by Michael View Post
                              It's not unfair... It's showing the performance of two operating systems in their stock configurations.
                              So it's a really stupid benchmark. The systems should use the same settings where possible, or at least very similar ones, and you should let us know what settings were used. It looks like you're burying your heads in the sand.

                              Comment


                              • #45
                                Originally posted by kraftman View Post
                                The systems should use the same settings where possible, or at least very similar ones, and you should let us know what settings were used.
                                As mentioned in the article, the only major change was to Ubuntu, disabling Compiz. The screensavers in each OS were also disabled. Is there another setting you want to know about?
                                Michael Larabel
                                http://www.michaellarabel.com/

                                Comment
