Mac OS X 10.5 vs. Ubuntu 8.10 Benchmarks


  • Thetargos
    replied
    Originally posted by deanjo View Post
    As his edited message says, it's not really Linux but X that is behind by several years, which is a pretty accurate statement IMHO.
    Actually, I don't think it is/was... Linux became relevant as a desktop alternative fairly recently, and since focus has been placed on the desktop by various groups (Canonical's Ubuntu, SuSE's OpenSuSE, Red Hat's Fedora, and many, many other projects), Xorg development has been blazing fast, with a LOT of new features in the past three to four years, to the point where Linux desktops (and some other Unix systems) have the prettiest and most useful desktop effects around. Sure, the API is convoluted, big, and messy, and has been through several revisions since the switch from XFree86 to Xorg, but all in all it has come together pretty nicely. IIRC, MacOS' Quartz was in development for at least four years before they could integrate it into OS X (ever since Jobs rejoined the company with his NextStep ideas and concepts), and Mac's focus has always been the desktop.

    X is slowly evolving, yes, no doubt, and most likely the Server will end up being replaced, with the possibility of a fallback for old applications, much like the case in MacOS. However, how long it will take for X to become completely obsolete and redundant, I don't know; it may indeed never become redundant or obsolete, but it may very well cease to be the primary graphics system for Unix systems.

    Leave a comment:


  • kraftman
    replied
    Originally posted by deanjo View Post
    As his edited message says, it's not really Linux but X that is behind by several years, which is a pretty accurate statement IMHO.
    That's possible. Luckily, it seems that some people are paying more attention to X now than some time ago - more frequent updates, bugs fixed faster, etc. Maybe in the future Wayland will replace the X Server on Linux.

    Leave a comment:


  • deanjo
    replied
    Originally posted by kraftman View Post
    @zAo_

    Linux is 5+ years behind what? It's 2008 now, not 2013 (but maybe that's due to some mistake), so I recommend you change the date in your system settings. And you just proved what I said in #51. Thanks a lot.
    As his edited message says, it's not really Linux but X that is behind by several years, which is a pretty accurate statement IMHO.

    Leave a comment:


  • kraftman
    replied
    @Thetargos,

    Yes, you're right. I hope that the next benchmark will be a little more "fair", thanks to Drag's, your, and a few other people's posts.

    @zAo_

    Linux is 5+ years behind what? It's 2008 now, not 2013 (but maybe that's due to some mistake), so I recommend you change the date in your system settings. And you just proved what I said in #51. Thanks a lot.
    Last edited by kraftman; 11 November 2008, 06:09 AM.

    Leave a comment:


  • zAo_
    replied
    I love working with Debian-based Linux, but this proves it: Linux is 5+ years behind.

    EDIT: nah; X is 7 years behind, Linux isn't.

    Leave a comment:


  • Thetargos
    replied
    Well, I see your point, but keep in mind that none of this was clear until this discussion took place. Maybe a follow-up article with further investigation regarding how MacOS handles fsync() and other things, like the state of the drivers, would be worthwhile to clarify this issue.

    Leave a comment:


  • kraftman
    replied
    @Thetargos

    I've got your point, but those differences you are talking about should be mentioned in the article. Without them, many idiots will think that Mac OS is faster. And why not use a bigger file in the bonnie++ benchmark?

    Leave a comment:


  • Thetargos
    replied
    Originally posted by kraftman View Post
    Yeah, but it's such an idiotic benchmark that no one can imagine. It's like testing two graphics cards in Quake 3, the first card using low-quality mode and the second using high quality, then saying that the first card is faster... In an objective benchmark, people use the same versions of the applications and the same settings. If not, the benchmark is just a piece of crap.
    The applications ARE the same, even built from source on both platforms, and the options passed by the PTS are also the same. The differences in the I/O tests are not the fault of the apps or of the benchmarking tool used to measure their performance, but rather of how the individual systems handle the requests being made by the applications (case in point, the SQL tests). What you are saying is more like comparing a BMW V8 engine to a Dodge HEMI engine. The HEMI can "shut off" four cylinders when they are not in use, hence being more fuel efficient, and may even have a bit of lag powering those up when they're needed (acceleration lag, if you will). The BMW engine, on the other hand, since all 8 cylinders are active all the time, will not have said lag when accelerating[1]... It's inherent to the ENGINE, not the fuel used or the road the cars are driven on.

    Sure, you can tune both cars up and change them a LOT, but in their stock configurations one may have a bit of lag and the other not. Well, this is exactly the same! The applications used to assess performance were all run in identical modes; the systems reacted to them in different ways. That seems to be clear now, and (duh!) these tests also served to show this difference between the two systems. Remember that (as much as we may hate it, since Linux appears to lose) these tests are comparing (in a sort-of-objective manner) apples to oranges (pun intended!). Sure, there are a LOT of things that could be modified now that we know how both systems react when pitted against one another, and these aspects could be changed in future tests... However, none of that would have been clear without testing the default configurations. This is but the first series of tests, and thus it serves as the foundation for a methodology to accurately and objectively test and compare more than one platform. What is idiotic is your attitude towards these results. They are not definitive, but you have to reckon that without a baseline, how in the hell can you build an objective, reproducible, and accurate methodology to test across platforms (and that applies to Linux vs. MacOS, MacOS vs. FreeBSD, FreeBSD vs. OpenSolaris, or all compared to one another)?

    The tools are objective enough; system settings may have to be fiddled with to get more consistent results and to get around bottlenecks (like MacOS ignoring fsync() calls). However, rejecting these facts without knowing how things work in their defaults is utterly stupid; otherwise, how can you know what may be affecting performance? And don't forget that in this particular case, Apple chose to ship MacOS that way, even at the risk of corrupting data (but then again, no file system is ever free of data corruption). Don't forget either that Ext3 (dunno about Ext4) is actually one of the fastest journaling file systems when operating in fully journaled mode, which it does not do in its default configuration, nor do any of the other FSes that I know of.

    Bottom line: it required the intervention of many people in this thread to determine why MacOS X gets better results than Ubuntu 8.10 with some applications run the same way, and it turned out that the OS "cheats" by not flushing the buffers immediately. If anything, this discussion brought the issue to light, and it is a thing to consider in any intensive HDD I/O tests in the future; without these results and this discussion it may have gone undetected and without any consideration. The applications that were built from source on both platforms also reflect the contrast between GCC versions (yes, versions: if you wanted to test against a Linux distro with the same GCC version, it'd have to be one from 1-1.5 years ago), and remember that GCC generates optimized code, so that may be one thing worth looking into as well (having both systems match -march/-mcpu CFLAGS and seeing what happens), but again, that is going outside the defaults.
    1. I'm not saying that is exactly how these two engines work; it's just an overly simplified illustration.
    Last edited by Thetargos; 10 November 2008, 04:38 AM.

    Leave a comment:


  • kraftman
    replied
    Originally posted by Thetargos View Post
    And as Michael said, that's actually what's being tested
    Yeah, but it's such an idiotic benchmark that no one can imagine. It's like testing two graphics cards in Quake 3, the first card using low-quality mode and the second using high quality, then saying that the first card is faster... In an objective benchmark, people use the same versions of the applications and the same settings. If not, the benchmark is just a piece of crap.

    Leave a comment:


  • deanjo
    replied
    Originally posted by Thetargos View Post
    Hmmm... Indeed. However, at the user level and in user experience, these "cheats" (as you seem to be making them out to be) are actually part of the vendor's advertised experience... So MacOS lies about syncing to the HDD to give a perceived performance improvement; that is its default configuration. And as Michael said, that's actually what's being tested. Beyond this perceived impression of the performance delta in I/O (regardless of whether OSX is lying to the application or not), a meta-analysis can yield whatever you like... The actual "performance" (or impression of it) from the tests is a whole other story.

    I have a few comments about the delta in performance with the graphics drivers:
    1. As far as I know, Apple makes all their drivers in-house and has any number of NDAs signed with the manufacturers to get the specs, documentation, sample code and whatnot, which permits highly optimized video drivers for Apple Mac products.
    2. The graphics stack in the case of Apple is an actual OpenGL ICD driver stack. Remember that the open drivers rely on Mesa, and while Mesa is an implementation of the OpenGL stack, it is not an official ICD, nor endorsed by the Khronos group in any way. I do believe that the current Mesa stack is not as optimized as it could be.


    I think overall this was a fair comparison. Maybe not all of us agree with it, but it would be very interesting to see results from Macs powered by nVidia graphics using the nVidia proprietary driver on Linux, to see how they perform, since this would level things up a bit: the nVidia driver DOES ship its own OpenGL stack, not based on Mesa, and maybe that stack is more similar to the one used by MacOS' drivers. I don't know whether AMD's OpenGL stack for fglrx is based on Mesa (I do believe it is not)... There are no Intel Macs with ATi video hardware on them, are there? Or put another way, there are no fglrx drivers for PPC Linux either... And most likely the PPC MacOS drivers for Radeon blow away the OSS Radeon drivers currently available.
    Apple does not do the video drivers "in-house", and there most certainly are Intel Macs with ATI graphics as well.



    Last edited by deanjo; 10 November 2008, 02:00 AM.

    Leave a comment:
