Ubuntu 9.04 vs. Mac OS X 10.5.6 Benchmarks


  • #46
    The reality of the matter is that, on this particular computer, OS X is able to offer superior performance to Linux in many of the benchmarks.

    It's obvious that the OS X developers put a lot of time and effort into optimizing and tweaking OS X to work as fast as possible on this machine.

    That is one of the nice things about having a sharp focus: OS X developers are able to concentrate on specific hardware configurations and specific uses. Linux developers, meanwhile, are producing a much more general-purpose OS, and it is much more modular, portable, and diverse. There are lots of layers and a lot of complexity designed into the system to allow this.

    There is a definite penalty for doing this. In the past Linux was almost always faster than OS X, but nowadays not so much; OS X has apparently caught up. It would be interesting to see benchmarks with a higher level of complexity that put heavier loads on the system and exhaust the file system buffers, to see whether Linux's scheduling and scalability come into play.

    Also it would be interesting to see the performance on higher end hardware.

    -------------------------------


    As far as proprietary drivers go... Nvidia uses the same general OpenGL and driver code for all the OSes it produces drivers for.

    For the longest time Apple machines sucked at graphics performance because it was Apple developers who were writing the drivers... but apparently they've decided to switch to the approach that Linux users use, which is to shove a bunch of code originally designed and developed for Windows into their systems.

    ----------------------

    Keep in mind that there are still other benefits from using the Linux approach to modular operating systems.

    The package system is superior to what Apple uses. Apple is able to make things pretty usable and very easy, but technically the package-repository approach is the superior one.

    Also, Linux has much, much better hardware support.

    A few examples of my machines...
    A)
    A Dell Mini-9 with 16GB of disk and 1GB of RAM. OS X can be installed on it, but the wireless network is flaky, the wired port is unusable, the sound is flaky, and it takes a lot of effort to get the hardware working as well as it can. And on this class of hardware OS X runs poorly.

    Meanwhile, with Linux on that system I have a snappy machine with somewhat lousy graphics performance, but Compiz still works and is quite usable with little effort.

    It uses about 256MB of RAM idling with a browser window or two open.

    B)
    This laptop, running Fedora 11 beta: Core 2 Duo with 4GB of RAM, 320GB 7200RPM hard drive, Intel graphics, etc.

    Got it at a fraction of the price that similar hardware would cost from Apple.

    Fedora runs great. Suspend works. Wifi works. My Wacom tablet is now completely plug-and-play, with extended input support and everything.

    On top of that I have KVM, which I use with virt-manager. I have XP, Vista, Windows 7, Ubuntu, Debian, FreeBSD, NetBSD, and even OS X installed in VMs. All of them work pretty well except for OS X.

    I use rdesktop to access the XP VM for times when I need to use MS Office.

    The thing is fabulous, and it is a hog. With the XP VM and full-blown GNOME running, it consumes almost a gig and a half of RAM idling. This is 'high productivity' mode for when I need to get work done.

    C)
    My Dell Inspiron 4100.

    A fabulous machine; it's slowly being turned into a lovely hack box.

    It runs Debian Sid, one of the best operating systems you could ever use for anything. Usability sucks somewhat, but I don't care... this thing is for fun. I rescued it from work. They were going to throw it away: too slow, ran like crap, crashed, hung, refused to boot... they thought it was a hardware issue. It was just Windows sucking so hard it removed all the oxygen from the room.

    Pentium III 1.13GHz, 512MB of RAM, 40GB hard drive, some ancient Radeon Mobility with 16MB of VRAM. I discarded the DVD drive and jammed both modular bays with the batteries with the most life left I could find. It gets about 4-5 hours of battery life.

    This thing is _FANTASTIC_. Running Debian Sid it is running better than it ever did in its entire life. Seriously. Blows XP right out of the water.

    Midori runs fast and is very responsive. Uses about half the RAM that Firefox does. I am using the LXDE desktop environment. I added on some Gnome features I 'require' like network-manager and gnome-power-manager, but it's still mostly LXDE.

    LXDE is to Xfce what Xfce is to Vista.

    With the couple of GNOME add-ons I have, the system uses a total of 50MB of memory to run before opening any applications. Suspend works perfectly, both suspend-to-disk and suspend-to-RAM.

    I am going to get a couple of Alfa 500mW 802.11g USB adapters, make some homemade high-gain directional antennas, get that working with my GPS devices and some mapping software, and go have some fun wardriving. With that configuration I should be able to pick up a wifi access point from up to a mile away.

    I can watch DVD-rip movies on it. It's fast, it's responsive, it sips only a small amount of power (for this era of laptop), and there is no way in hell you could ever possibly get OS X (or Vista, or Windows 7) to run nearly as well.
    Last edited by drag; 05-13-2009, 04:57 AM.



    • #47
      Ubuntu 9.04 vs OS X 10.5.6

      I think the best way to test these two operating systems would be to install Ubuntu 9.04 as the sole OS on one Mac and compare it against an identical Mac with OS X 10.5.6 installed natively.

      Or we could use an Intel x86 PC with a hardware configuration similar to the Mac's. That would be very exciting to see...



      • #48
        Originally posted by jybumaat View Post
        I think the best way to test these two operating systems would be to install Ubuntu 9.04 as the sole OS on one Mac and compare it against an identical Mac with OS X 10.5.6 installed natively.

        Or we could use an Intel x86 PC with a hardware configuration similar to the Mac's. That would be very exciting to see...
        Hardware probably has nothing to do with those results. Using noatime and using ext4 with delayed allocation would probably speed things up a lot. Sadly, Phoronix only benchmarks the defaults when it comes to operating systems, and then some people believe OS X is faster at Postgres or MySQL than Linux, FreeBSD, Solaris...
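
        To illustrate what that tuning looks like, here is a minimal sketch of my own (assuming Linux and root privileges; the "/home" path and the ext4 mount are placeholders) of remounting a filesystem with noatime via the mount(2) syscall, which is the same thing you normally get by adding "noatime" to the options column in /etc/fstab:

        Code:
        /* Sketch: remount a filesystem with noatime so plain reads stop
         * generating access-time writes. Normally done in /etc/fstab or with
         * "mount -o remount,noatime"; shown here through the syscall. */
        #include <stdio.h>
        #include <sys/mount.h>

        int main(void)
        {
            /* For MS_REMOUNT the source and fstype arguments are ignored. */
            if (mount(NULL, "/home", NULL, MS_REMOUNT | MS_NOATIME, NULL) != 0) {
                perror("mount");
                return 1;
            }
            return 0;
        }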

        P.S. Why are there so many OS X advertisements now? On the left, on the right, on top... :>
        Last edited by kraftman; 05-13-2009, 09:24 AM.



        • #49
          Personally, I find the benchmark incorrect because it compares two systems that operate in different modes. In my opinion (and please correct me if I'm wrong), Mac OS X runs 64-bit applications even though the kernel itself runs in 32-bit mode:

          Because device drivers in operating systems with monolithic kernels, and in many operating systems with hybrid kernels, execute within the operating system kernel, it is possible to run the kernel as a 32-bit process while still supporting 64-bit user processes. This provides the memory and performance benefits of 64-bit for users without breaking binary compatibility with existing 32-bit device drivers, at the cost of some additional overhead within the kernel. This is the mechanism by which Mac OS X enables 64-bit processes while still supporting 32-bit device drivers.
          (from http://en.wikipedia.org/wiki/64-bit)

          Mac OS X uses an extension of the Universal binary format to package 32- and 64-bit versions of application and library code into a single file; the most appropriate version is automatically selected at load time.
          ...
          Mac OS X v10.4.7 and higher versions of Mac OS X v10.4 run 64-bit command-line tools using the POSIX and math libraries on 64-bit Intel-based machines, just as all versions of Mac OS X v10.4 and higher run them on 64-bit PowerPC machines. No other libraries or frameworks work with 64-bit applications in Mac OS X v10.4
          (from http://en.wikipedia.org/wiki/X86-64)

          So what happens, in my opinion, is that the tests are executed in 64-bit mode on OS X, and, not surprisingly, Ubuntu's results are worse in this case. What brought me to this idea is the extremely bad performance in Crafty, where there should be no difference in performance at all. What Crafty does is mainly evaluate moves, access the transposition table, and use threads to utilize more processors. So apart from the thread usage, which involves system calls (but Linux is generally fast at creating threads), the rest of the program avoids system calls and should be OS-independent. On the other hand, the difference between 64-bit and 32-bit application performance is very noticeable because of the more efficient chessboard representation in 64-bit mode.
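
          To make the chessboard point concrete, here is a tiny sketch of my own (not Crafty's actual code) of the bitboard idea: the whole board fits in one 64-bit word, one bit per square, so move generation becomes a few 64-bit shifts and masks. On a 32-bit build each of those operations has to be split across two registers, which is roughly where the 64-bit speedup comes from:

          Code:
          #include <stdint.h>
          #include <stdio.h>

          typedef uint64_t bitboard;              /* one bit per square, a1 = bit 0 */

          #define FILE_A 0x0101010101010101ULL
          #define FILE_H 0x8080808080808080ULL

          /* All squares attacked by a set of white pawns: shift the whole board at once. */
          static bitboard white_pawn_attacks(bitboard pawns)
          {
              return ((pawns & ~FILE_A) << 7)     /* captures toward the a-file */
                   | ((pawns & ~FILE_H) << 9);    /* captures toward the h-file */
          }

          int main(void)
          {
              bitboard pawns = 0x000000000000FF00ULL;   /* white pawns on rank 2 */
              printf("attacked squares: %016llx\n",
                     (unsigned long long)white_pawn_attacks(pawns));
              return 0;
          }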

          In my opinion, when comparing Mac OS X and 64-bit Ubuntu the difference between these two systems will be marginal (apart, of course, from the graphics benchmarks and possibly that SQLite regression). I would even expect Ubuntu to be slightly better. I also think that ext4 should be used for the comparison: first, it will probably become the default file system in 9.10; second, it would be more interesting to see the performance of a state-of-the-art FS (well, almost, until it is replaced by something better like btrfs) instead of an old FS with known performance issues.



          • #50
            Originally posted by drag View Post
            Well... the kernel can't do that. The firmware on the hard drive decides what gets written to the platter. That sort of thing isn't something the kernel has control over.

            The best the OS can do is flush its own buffers and then send a request to the drive to flush its buffers; sometimes the firmware lies about it and other times it doesn't. It usually lies.

            But that sort of behavior would affect both Linux and OS X equally, so it wouldn't give a performance advantage to either OS over the other.

            Otherwise that's a nice insight and I am still reading other replies in this thread.

            I'm just nitpicking.
            Ultimately yes, the firmware does decide the data's fate. If the firmware is giving false responses back, then there is nothing an OS can really do to affect that. If that is the case, though, it should affect all OSes. As a side note, there is something drastically wrong when it comes to SQLite performance on ext3. Switch to XFS and you will get much faster results, and ext3 has been getting slower since around 2.6.18. It's something I have noticed for a while now.
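
            For what it's worth, here is a small sketch of my own (not from the article) of what that 'flush and ask the drive' request looks like from an application: fsync() flushes the OS's buffers for a file, and on Mac OS X there is an extra fcntl, F_FULLFSYNC, that asks the drive to commit its own write cache as well; it's the call SQLite's fullfsync option uses, and it is one reason sync-heavy database benchmarks can behave very differently across OSes and filesystems:

            Code:
            #include <fcntl.h>
            #include <stdio.h>
            #include <unistd.h>

            int main(void)
            {
                int fd = open("testfile", O_WRONLY | O_CREAT, 0644);
                if (fd < 0) { perror("open"); return 1; }

                if (write(fd, "hello\n", 6) != 6) { perror("write"); return 1; }

                if (fsync(fd) != 0)              /* flush the OS buffers for this file */
                    perror("fsync");

            #ifdef F_FULLFSYNC
                if (fcntl(fd, F_FULLFSYNC) != 0) /* OS X only: ask the drive to flush too */
                    perror("fcntl(F_FULLFSYNC)");
            #endif

                close(fd);
                return 0;
            }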



            • #51
              Originally posted by mendieta View Post
              Michael, I know you mentioned this in the article, but the Intel drivers still have many issues in 9.04, so of course it would be beaten easily; this article is a reflection of that. It is not representative of Ubuntu 9.04 vs. OS X 10.5.6.

              A fairer comparison would be a desktop with a graphics card that is well supported, maybe at this point some NVIDIA card, or one of the ATI cards that are actually running with fglrx in 9.04 ;-)
              I find it even more interesting that he did use the Intel graphics for the tests. It shows just how bad they are on Linux.



              • #52
                The majority of Linux users today use 64-bit operating systems, since there are serious performance gains and no compatibility problems anymore. So I can hardly understand why these benchmarks were run with a 32-bit version of Ubuntu in the first place. 64-bit would make the difference in the FFmpeg, Ogg, and LAME encoding tests, and I'm not talking about tweaking and hacking, just a very ordinary desktop choice. Currently the benchmarks show 17 vs. 12 in Mac OS X's favour, while with 64-bit the result would pretty easily be 14 vs. 15 in Ubuntu's favour.
                A third benchmark run with the 64-bit version of Ubuntu is essential, IMHO.



                • #53
                  Originally posted by drag View Post
                  Holy crap, NO.

                  First off: the kernel OS X uses is NOT a microkernel.

                  The OS X kernel is called XNU. It's a so-called 'hybrid' kernel that uses code from a development kernel that died in 1995 combined with BSD code. The Mach kernel, at different times in its history, was a microkernel and then not a microkernel. OS X does not use a microkernel.

                  The Windows NT kernel was another one that was based on a microkernel design but is not a microkernel. Early versions of NT were microkernels, but unfortunately for that design Microsoft could not figure out how to make it scale, and the excess overhead caused by the message-passing design doomed it. So later versions of the NT kernel were monolithic.

                  If you want to, you can call them 'hybrid kernels', but I think that is just a made-up term to make the OS kernel sound all microkernel-ish and cool while it is, in fact, a modular monolithic design.


                  Nope. Not going to happen. Microkernels were essentially a pipe dream, and only one microkernel-based OS actually made it into widespread use. That OS was QNX, and it was popular for embedded systems due to its realtime-like nature.

                  But it wouldn't scale to anything big and nobody wanted to use it as a desktop or server platform.
                  Yeah, I agree with you on most of your points. But I'd argue that microkernels are all over the place. In fact, I'd call the memory and thread manager of every single SQL database system today a microkernel implementation hybridized onto its monolithic host OS.



                  • #54
                    Originally posted by deanjo View Post
                    Ultimately yes, the firmware does decide the data's fate. If the firmware is giving false responses back, then there is nothing an OS can really do to affect that. If that is the case, though, it should affect all OSes. As a side note, there is something drastically wrong when it comes to SQLite performance on ext3. Switch to XFS and you will get much faster results, and ext3 has been getting slower since around 2.6.18. It's something I have noticed for a while now.

                    I did some SQLite tests in KVM and Ext3 is really slow when compared to Ext4:

                    http://img75.imageshack.us/img75/671...ultssqlite.png

                    http://img75.imageshack.us/img75/933...ultssqlite.png

                    It probably means ext4 and HFS+ are using the cache and ext3 isn't. Or ext3 just sucks.

                    P.S. The results are reproducible.

                    P.S.2: If TeeKee is right (and he probably is), this Phoronix benchmark sucks a little...
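
                    For context, the SQLite test is basically a batch of INSERTs, and with a separate transaction per INSERT every row forces a journal sync, so the number mostly measures how the filesystem handles fsync rather than the database itself. Here is a rough sketch of my own (not the actual Phoronix test profile) of that kind of workload, with the inserts wrapped in a single transaction so there is one sync instead of thousands; build with gcc bench.c -lsqlite3:

                    Code:
                    #include <stdio.h>
                    #include <sqlite3.h>

                    int main(void)
                    {
                        sqlite3 *db;
                        if (sqlite3_open("bench.db", &db) != SQLITE_OK) {
                            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
                            return 1;
                        }

                        sqlite3_exec(db, "CREATE TABLE t (i INTEGER, s TEXT);", NULL, NULL, NULL);

                        /* One big transaction: one sync instead of one per row. */
                        sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);
                        for (int i = 0; i < 25000; i++)
                            sqlite3_exec(db, "INSERT INTO t VALUES (1, 'some text');", NULL, NULL, NULL);
                        sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);

                        sqlite3_close(db);
                        return 0;
                    }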
                    Last edited by kraftman; 05-13-2009, 10:51 AM.



                    • #55
                      Does room for error exist? Absolutely! I think more tests need to be done using Mac systems, for sure! Why not put a Mac server against a Debian server, and compare how several Linux desktop distros perform on Mac hardware? Put both of them in x86_64 mode and build Linux with the same optimizations you get out of Intel Macs, i.e. SSE instructions. To the best of my knowledge the Linux kernel doesn't even do anything to take advantage of this.

                      But as I see it now, we got our asses handed to us. Is it sad? You better bet it is. But we know we can improve. Excuses are excuses... The whole 'Fedora is amazing' thing is a pile, because we all know the difference is nominal at best.

                      We lost... let's not act like 9-year-olds and argue that we really didn't. Surely we can be grown men (and women) and discuss how we can actually make the situation better.



                      • #56
                        Originally posted by Hephasteus View Post
                        Yeah, I agree with you on most of your points. But I'd argue that microkernels are all over the place. In fact, I'd call the memory and thread manager of every single SQL database system today a microkernel implementation hybridized onto its monolithic host OS.

                        I don't know anything about that, but I do know that having a threading model and memory management isn't something unique to SQL databases. Pretty much every large multi-threaded application is going to have to manage its memory and threads and such.


                        Remember, what makes a microkernel a microkernel is that the actual kernel doesn't do anything more than message passing.

                        Various separate processes then 'orbit' that kernel and provide services that the OS can use. The Hurd, for example, is a collection of programs that provide low-level facilities on top of an L4 kernel. So you'd have a program that provides access to the hard drive, then another program that provides file system access, then another that provides the POSIX APIs, etc.

                        So all the kernel does is pass messages from one service daemon to another. It has zero functionality beyond that.

                        And, perversely, microkernels tend to be hugely complicated. They are usually quite a bit larger than a monolithic kernel even though they have no functionality built in besides message handling.

                        It's pretty obvious why they are not really that successful if you step back and look at what they really are.
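
                        A toy sketch of my own of that idea (the port numbers and server names are made up purely for illustration): the 'kernel' below does nothing except look at a message's destination and hand it to the right server, while the servers, which would be separate userspace processes in a real microkernel, do all the actual work:

                        Code:
                        #include <stdio.h>

                        struct msg {
                            int  dest_port;        /* which server should receive this */
                            char payload[64];      /* request data, opaque to the kernel */
                        };

                        /* Stand-ins for what would be separate userspace server processes. */
                        static void disk_server(const struct msg *m) { printf("disk: %s\n", m->payload); }
                        static void fs_server(const struct msg *m)   { printf("fs:   %s\n", m->payload); }

                        /* The entire 'kernel': route the message, nothing else. */
                        static void kernel_send(const struct msg *m)
                        {
                            switch (m->dest_port) {
                            case 1: disk_server(m); break;
                            case 2: fs_server(m);   break;
                            default: printf("kernel: no such port\n");
                            }
                        }

                        int main(void)
                        {
                            struct msg m = { 2, "open /etc/fstab" };
                            kernel_send(&m);       /* app -> kernel -> filesystem server */
                            return 0;
                        }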


                        ----------------

                        Now a modern monolithic kernel, like the Linux kernel, is a big object-oriented, multithreaded monster. Each major feature has its own thread, and there are lots of different small 'kernel-level program'-type things that provide services and features used by the rest of the kernel. The difference is that there is no message passing going on and they all occupy the same address space, so one can twiddle another's bits and read another's memory in a very efficient manner.

                        This is why proprietary software like Nvidia's drivers, which are high-performance and have lots of features but like to stuff huge amounts of code into the kernel, tends to suck. The Nvidia driver can, at any time, arbitrarily access and overwrite any other part of the kernel. If the Nvidia driver has a memory overflow or other hiccup, it can easily blow away the memory containing, say, your ext3 support.

                        With normal applications, each one occupies its own virtual memory space. That is, each application sees its own unique address space. All the application sees is its own virtual 4GB of RAM that it can do with as it will. This is the 'virtual' part of virtual memory. Each application has its own VM sandbox, and it's very difficult for an application to break out of its memory sandbox; it can't even see what is going on in the memory of other applications. With kernel modules in Linux there are no memory protection features like that, and a kernel module can very easily view and edit any other part of the running kernel.

                        There really isn't anything that would stop it, and that is the major design deficiency of a monolithic kernel.
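
                        The per-process sandbox is easy to demonstrate (a minimal example of my own, assuming any POSIX system): after a fork() the child writes to 'the same' global variable at the same virtual address, yet the parent's copy is untouched, because each process only ever sees its own private address space:

                        Code:
                        #include <stdio.h>
                        #include <unistd.h>
                        #include <sys/wait.h>

                        int value = 42;   /* same virtual address in parent and child */

                        int main(void)
                        {
                            pid_t pid = fork();
                            if (pid < 0) { perror("fork"); return 1; }

                            if (pid == 0) {
                                value = 1000;   /* only the child's private copy changes */
                                printf("child:  &value=%p value=%d\n", (void *)&value, value);
                                return 0;
                            }

                            wait(NULL);
                            printf("parent: &value=%p value=%d\n", (void *)&value, value);  /* still 42 */
                            return 0;
                        }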

                        This is why the open-source video driver model tries to shove as much of the video driver into userspace as possible: the kernel portion is kept as small as possible and the majority of the video processing happens via the DRI2 protocol in userspace.
                        Last edited by drag; 05-13-2009, 12:53 PM.



                        • #57
                          Thank you for sharing your understanding of operating systems. This has been a very enlightening thread. It's good to see more and more daemons running on Linux, and I see more clearly that a herd of daemons might not be that great. It'll be interesting watching them hybridize the kernel as it moves to more advanced video handling. I hope they do a good job, but it's looking like a bumpy ride so far. It can't last forever, though.



                          • #58
                            Originally posted by L33F3R View Post
                            Does room for error exist? Absolutely! I think more tests need to be done using Mac systems, for sure! Why not put a Mac server against a Debian server, and compare how several Linux desktop distros perform on Mac hardware? Put both of them in x86_64 mode and build Linux with the same optimizations you get out of Intel Macs, i.e. SSE instructions. To the best of my knowledge the Linux kernel doesn't even do anything to take advantage of this.

                            But as I see it now, we got our asses handed to us. Is it sad? You better bet it is. But we know we can improve. Excuses are excuses... The whole 'Fedora is amazing' thing is a pile, because we all know the difference is nominal at best.

                            We lost... let's not act like 9-year-olds and argue that we really didn't. Surely we can be grown men (and women) and discuss how we can actually make the situation better.
                            Making the situation better here would mean making a 32-bit OS act as if it were 64-bit. You admit defeat only after a fair battle.



                            • #59
                              Originally posted by Apopas View Post
                              Making the situation better here would mean making a 32-bit OS act as if it were 64-bit. You admit defeat only after a fair battle.
                              Well, you could be a child and fight over details, or you could move on and improve your product. This principle has been demonstrated in Japanese business and has proven to be quite successful. You can't have a fair battle on two different platforms when one of them consists of in-house hardware; Apple is going to have the advantage in any OS fight because of this.

                              That brings up a good plus for Linux: unlike the Mac, it can use a large variety of hardware. Historically, problems have erupted with hardware drivers, but I have noticed that in recent times the driver situation has been getting a lot better. Additionally, I can build a $300 computer and play ETQW on high quality with Linux; the Mac mini is $600 and has a moderate HDD/RAM at best.

                              Don't look at the situation from a linear perspective. I agree more tests need to be done, but let's not forget there's room for improvement.



                              • #60
                                Originally posted by L33F3R View Post
                                You can't have a fair battle on two different platforms when one of them consists of in-house hardware; Apple is going to have the advantage in any OS fight because of this.
                                Exactly, that's my point. I believe that even on Apple's in-house hardware, Linux will have the best performance. Just use the newest version of it, and the newest is the 64-bit one. When we have it and it's easy to install, why stick with the old one? It is not a minor detail; it's the logical choice.

