Mac OS X 10.5 vs. Ubuntu 8.10 Benchmarks


  • #41
    Ya.

    Without more information it's hard to tell exactly what is going on with these benchmarks.

    It's pretty plain that the Linux OSS drivers are getting trounced by OS X. This is not surprising given the immaturity of the platform (and X.org is an old platform; it has just been slow to mature, I guess. Let's hope it's a late bloomer).

    The way I see it, for a long time the developers were just trying to get the stupid thing to run stably. After all, we have no fewer than three graphics drivers operating on a single piece of hardware at any one time: your VGA or framebuffer console driver, your 2D DDX driver, and then your DRI/DRM drivers.

    That's no fewer than three different projects with different approaches and ideologies working together: Linux kernel developers (DRM/VGA/framebuffer), the X.org DDX drivers twiddling bits on the PCI bus, and then the DRI drivers from the Mesa and X.org folks.
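
    Just to illustrate how many layers are in play, here's a rough sketch of my own (nothing official; the tool names and log paths are assumptions about a typical Ubuntu install) that prints which driver is active at each of the three levels:

    Code:
    # Rough sketch: report which driver sits at each of the three layers on a
    # typical Linux box. Tool names and log paths are assumptions, not gospel.
    import re
    import subprocess

    # 1. Kernel layer: DRM / framebuffer modules
    lsmod = subprocess.check_output(["lsmod"]).decode()
    kernel_mods = [l.split()[0] for l in lsmod.splitlines()
                   if re.match(r"(drm|radeon|i915|nouveau|fb)", l)]
    print("Kernel DRM/framebuffer modules:", ", ".join(kernel_mods) or "none found")

    # 2. X.org layer: the 2D DDX driver the server loaded
    try:
        with open("/var/log/Xorg.0.log") as log:
            for line in log:
                if "Loading" in line and "_drv.so" in line:
                    print("DDX driver:", line.strip())
    except IOError:
        print("Could not read Xorg.0.log")

    # 3. Mesa/DRI layer: what OpenGL actually ends up rendering with
    glxinfo = subprocess.check_output(["glxinfo"]).decode()
    for line in glxinfo.splitlines():
        if line.startswith(("direct rendering", "OpenGL vendor", "OpenGL renderer")):
            print(line)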


    Just getting it to run was a challenge.

    Hopefully with the modernization of the driver model and improvements to the X server we can start to see them concentrating more and more on performance.

    Especially once they get the attention of the Linux kernel developers, who are all about performance, things should start to shape up.

    For example:


    It may be locked for now unless you're a subscriber, but basically it's talking about taking a memory subsystem designed to let high memory (above 1 GB) be used efficiently on 32-bit systems and applying it to graphics memory management.

    It led to an 18x improvement in Quake 3 performance, and glxgears went from 85 FPS to 360 FPS.

    Of course, that was with the development memory-managed driver and not the ones anybody is using now (the ones in production are better optimized, so the gains there would be smaller).

    -----------------------


    As for the rest of the benchmarks in this article, it's very difficult to draw solid conclusions about them.

    In all of these tests they are using the same code compiled with the same compiler on both OS X and Ubuntu. So while the numbers are interesting, it would be more interesting to investigate and determine exactly why the benchmarks get the results they do.

    So it's important that readers be given settings and details so that they can see for themselves and accurately recreate what is being shown.

    I suppose I am missing where this is laid out, so if anybody can help me out I would be very grateful.
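
    Even something as simple as a little script that dumps the toolchain and kernel details next to the results would help a lot. A minimal sketch of what I mean (the exact fields are just my guess, not anything PTS actually records):

    Code:
    # Minimal sketch: record the basic environment alongside benchmark results
    # so others can try to reproduce them. Fields here are just my guess.
    import platform
    import subprocess

    def run(cmd):
        try:
            return subprocess.check_output(cmd).decode().strip()
        except OSError:
            return "unavailable"

    report = {
        "os": platform.platform(),
        "kernel": platform.release(),
        "cpu": platform.processor() or run(["uname", "-p"]),
        "gcc": run(["gcc", "--version"]).splitlines()[0],
        "gcc_target": run(["gcc", "-dumpmachine"]),
    }

    for key, value in report.items():
        print("%s: %s" % (key, value))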

    For example, it could be a performance bug in Ubuntu. Maybe a compiler misconfiguration or a kernel bug is limiting performance. Maybe it's something easy to fix that could lead to a vast improvement.

    Or it could be that the benchmarks are not being used properly, and readers could point out improvements or changes that would help Phoronix improve how it reports results and the benchmark suites it uses.

    I dunno.



    • #42
      Some portals are publishing this really unfair benchmark... It's sad that Phoronix didn't do anything to stop misleading people. Nothing remains except to inform those portals that Phoronix benchmarks can't be taken seriously.



      • #43
        Originally posted by kraftman View Post
        Some portals are publishing this really unfair benchmark... It's sad that Phoronix didn't do anything to stop misleading people. Nothing remains except to inform those portals that Phoronix benchmarks can't be taken seriously.
        It's not unfair... It's showing the performance of two operating systems in their stock configurations.
        Michael Larabel
        https://www.michaellarabel.com/



        • #44
          Originally posted by Michael View Post
          It's not unfair... It's showing the performance of two operating systems in their stock configurations.
          So it's a really stupid benchmark. The systems should use the same settings if possible, or at least very similar ones, and you should let us know what settings were used. It looks like you're burying your heads in the sand.



          • #45
            Originally posted by kraftman View Post
            The systems should use the same settings if possible, or at least very similar ones, and you should let us know what settings were used.
            As mentioned in the article, the only major change made was disabling Compiz on Ubuntu. The screensavers in each OS were also disabled. Is there another setting you want to know about?
            Michael Larabel
            https://www.michaellarabel.com/



            • #46
              Originally posted by Michael View Post
              As mentioned in the article, the only major change made was disabling Compiz on Ubuntu. The screensavers in each OS were also disabled. Is there another setting you want to know about?
              Read Drag's post #34... You were benchmarking Mac OS cache performance and Ubuntu disk I/O in some tests.
              Last edited by kraftman; 09 November 2008, 09:30 AM.



              • #47
                Originally posted by kraftman View Post
                Read Drag's post #34... You were benchmarking Mac OS cache performance and Ubuntu disk I/O in some tests.
                Hmmm... Indeed. However, at the level of what the user actually experiences, these "cheats" (as you seem to be making them out to be) are part of the vendor's advertised experience... So MacOS lies to applications about having synced to the HDD in order to gain perceived performance; that is its default configuration, and as Michael said, that's what's actually being tested. Beyond that perceived I/O performance delta (regardless of whether OS X is lying to the application or not), a meta-analysis can yield whatever you like... The actual "performance" (or the impression of it) from the tests is a whole other story.
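
                For anyone who wants to see this for themselves, here's a quick sketch of my own (not from the article): as I understand OS X's behaviour, a plain fsync() returns without forcing the drive to flush its write cache, while fcntl(F_FULLFSYNC) asks for a real flush and is dramatically slower:

                Code:
                # Sketch only (my own, as I understand OS X's behaviour): compare a plain
                # fsync() against fcntl(F_FULLFSYNC), which forces the drive to flush its
                # write cache. F_FULLFSYNC exists only on Darwin, hence the hasattr() check.
                import fcntl
                import os
                import time

                def timed_writes(path, full_flush, count=200):
                    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
                    start = time.time()
                    for _ in range(count):
                        os.write(fd, b"x" * 4096)
                        if full_flush and hasattr(fcntl, "F_FULLFSYNC"):
                            fcntl.fcntl(fd, fcntl.F_FULLFSYNC)   # real flush to the platter
                        else:
                            os.fsync(fd)                         # returns quickly on OS X
                    os.close(fd)
                    return time.time() - start

                print("fsync only:  %.2fs" % timed_writes("/tmp/synctest", False))
                print("F_FULLFSYNC: %.2fs" % timed_writes("/tmp/synctest", True))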

                I have a few comments about the delta in performance with the graphics drivers:
                1. As far as I know, Apple makes all its drivers in-house and has any number of NDAs signed with the manufacturers to get the specs, documentation, sample code and whatnot, which allows for highly optimized video drivers for Apple Mac products.
                2. Apple's graphics stack is an actual OpenGL ICD driver stack. Remember that the open drivers rely on Mesa, and while Mesa is an implementation of the OpenGL stack, it is not an official ICD and is not endorsed by the Khronos Group in any way. I do believe the current Mesa stack is not as optimized as it could be.


                I think overall this was a fair comparison. Maybe not all of us agree with it, and it would be very interesting to see results from Macs powered by nVidia graphics using the nVidia proprietary driver on Linux to see how they perform. That would level things up a bit, in that the nVidia driver DOES provide its own OpenGL stack not based on Mesa, and maybe that stack is more similar to the one used by MacOS' drivers. I don't know whether AMD's OpenGL stack for fglrx is based on Mesa (I believe it is not)... There are no Intel Macs with ATI video hardware in them, are there? Put another way, there are no fglrx drivers for PPC Linux either... And most likely the PPC MacOS drivers for Radeon blow away the OSS Radeon drivers currently available.



                • #48
                  Originally posted by Thetargos View Post
                  As far as I know, Apple makes all its drivers in-house and has any number of NDAs signed with the manufacturers to get the specs, documentation, sample code and whatnot, which allows for highly optimized video drivers for Apple Mac products. [...] There are no Intel Macs with ATI video hardware in them, are there?
                  Apple does not do the video drivers "in-house", and there most certainly are Intel Macs with ATI graphics as well.



                  Last edited by deanjo; 10 November 2008, 02:00 AM.



                  • #49
                    Originally posted by Thetargos View Post
                    And as Michael said, that's what's actually being tested.
                    Yeah, but it's a more idiotic benchmark than anyone could imagine. It's like testing two graphics cards in Quake 3, with the first card using low-quality mode and the second using high quality, and then saying the first card is faster... In an objective benchmark people use the same version of the applications and the same settings. If not, the benchmark is just a piece of crap.



                    • #50
                      Originally posted by kraftman View Post
                      Yeah, but it's a more idiotic benchmark than anyone could imagine. It's like testing two graphics cards in Quake 3, with the first card using low-quality mode and the second using high quality, and then saying the first card is faster... In an objective benchmark people use the same version of the applications and the same settings. If not, the benchmark is just a piece of crap.
                      The applications ARE the same, even built from source on both platforms, and the options passed by the PTS are also the same. The differences in the I/O tests are not the fault of the apps or of the benchmarking tool measuring their performance, but rather of how the individual systems handle the requests made by the applications (case in point, the SQL tests). What you are describing is more like comparing a BMW V8 engine to a Dodge HEMI engine. The HEMI can "shut off" four cylinders when they are not in use, hence being more fuel efficient, and it may even have a bit of lag when it powers them back up (acceleration lag, if you will). The BMW engine, on the other hand, since all eight cylinders are active all the time, will not have that lag when accelerating[1]... It's inherent to the ENGINE, not the fuel used or the road the cars are driven on.

                      Sure, you can tune both cars up and change them a LOT, but in their stock configurations one may have a bit of lag and the other not. Well, this is exactly the same! The applications used to assess performance were all run in identical modes; the systems reacted to them in different ways. That seems to be clear now, and (duh!) these tests also served to show this difference between the two systems. Remember that (as much as we may hate it, since Linux appears to lose) these tests are comparing, in a sort-of-objective manner, apples to oranges (pun intended!).

                      Sure, there are a LOT of things that could be modified now that we know how both systems behave when set against one another, and these aspects could be changed in future tests... However, none of that would have been clear without testing them in their default configurations. This is but the first series of tests, and thus it serves as the foundation for a methodology to accurately and objectively test and compare more than one platform. What is idiotic is your attitude towards these results. They are not definitive, but you have to reckon that without a baseline, how in the hell can you build an objective, reproducible and accurate methodology for testing across platforms (and that applies to Linux vs. MacOS, MacOS vs. FreeBSD, FreeBSD vs. OpenSolaris, or all of them compared to one another)?

                      The tools are objective enough; system settings may have to be fiddled with to get more consistent results and to get around bottlenecks (like MacOS effectively ignoring fsync() calls). However, rejecting these facts without understanding how things work in their default configurations is utterly stupid; otherwise, how can you know what may be affecting performance? And don't forget that in this particular case Apple chose to ship MacOS that way, even at the risk of corrupting data (but then again, no filesystem is ever free of data corruption). Don't forget either that Ext3 (dunno about Ext4) is actually one of the fastest journaling filesystems when operating in fully journaled mode, which it does not do in its default configuration, and neither do any of the other filesystems I know of.
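
                      As an aside, here's a quick way to check which data journaling mode an ext3 filesystem is actually mounted with (just my own sketch; if no data= option shows up in /proc/mounts, the kernel default of data=ordered is in effect):

                      Code:
                      # Sketch: list ext3 mounts and their data journaling mode. If no data=
                      # option appears, the kernel default (data=ordered) is being used.
                      with open("/proc/mounts") as mounts:
                          for line in mounts:
                              device, mountpoint, fstype, options = line.split()[:4]
                              if fstype == "ext3":
                                  mode = [o for o in options.split(",") if o.startswith("data=")]
                                  print(device, mountpoint,
                                        mode[0] if mode else "data=ordered (default)")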

                      Bottom line: it took the intervention of many people in this thread to determine why MacOS X gets better results than Ubuntu 8.10 with some applications run the same way, and it turned out that the OS "cheats" by not flushing the buffers immediately. If anything, this discussion brought the issue to light, and it is something to consider in any HDD-intensive I/O tests in the future; without these results and this discussion it might have gone undetected and unconsidered.

                      The applications built from source on both platforms also reflect the contrast between GCC versions (yes, versions; if you wanted to test against a Linux distro with the same GCC version, it would have to be one from 1-1.5 years ago), and remember that GCC generates optimized code, so that may be worth looking into as well: have both systems match their -march/-mcpu CFLAGS and see what happens (a rough sketch of that follows the footnote below). But again, that is going outside the defaults.
                      1. I'm not saying that is exactly how these two engines work; it's just an oversimplified illustration, and whatnot...
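
                      Something along these lines is what I have in mind for the CFLAGS experiment; the flag values and the build commands are placeholders of mine, not anything PTS actually does:

                      Code:
                      # Placeholder sketch: pin identical optimization flags on both systems
                      # before building a benchmark from source. Flag values are hypothetical.
                      import os
                      import subprocess

                      COMMON_FLAGS = "-O2 -march=core2 -mtune=core2"  # pick to match the hardware

                      env = os.environ.copy()
                      env["CFLAGS"] = COMMON_FLAGS
                      env["CXXFLAGS"] = COMMON_FLAGS

                      # "benchmark-src" is a stand-in for whichever test is being rebuilt
                      subprocess.check_call(["./configure"], env=env, cwd="benchmark-src")
                      subprocess.check_call(["make"], env=env, cwd="benchmark-src")
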
                      Last edited by Thetargos; 10 November 2008, 04:38 AM.

