Mac OS X 10.5 vs. Ubuntu 8.10 Benchmarks

  • deanjo
    replied
    Originally posted by kraftman View Post
    You meant X, right? In many other areas Mac OS just looks lame in comparison to Linux. In my opinion KDE4 just kills the Mac OS desktop, and that's what the average user sees. He doesn't know whether the system is using X or Quartz, and it's meaningless to him. Can you explain to me what's so cool about Quartz?
    The statement was made in general towards X, which is correct, but it also applies to the upcoming technologies that 10.6 is bringing, such as Grand Central, OpenCL, HD acceleration, etc. Things like multihead displays and 10,000 xorg.conf tweaks are not needed in OS X: plug and play. Even the video drivers are simply handled through OS updates. It's been like that from the start. Linux is just starting to enable such functionality and still has a ways to go. I'm sure the fact that the chipset had to use Mesa on Linux did not help performance in the games either.


    Maybe the drive in the benchmark does not support fsync and Ubuntu tries to do it another way, by emulation, etc., and that's why it's slow (but who knows if that was what was slow?). Don't take it too seriously, it's just my simple thinking :>

    Btw. I don't like Macs, but I still write "Mac OS". When I was younger I wrote "mac os", "windows" and "Linux", but that was just me being funny.
    Certain results of those tests are extremely slow even when compared to other distros, notably the SQLite and Bonnie++ ones. Now I don't know if Michael did a clean install on both systems and did not dual-boot, which could account for the slower results. Drive speeds are not consistent across platters; one would have to do a solo install of both to get good results. You don't like Mac OS, that's fine. Too many people tend to go into disbelief because of fanboyism and can't give credit where credit is due, and would rather come up with some fanatical reason for the results.



  • kraftman
    replied
    Originally posted by deanjo View Post
    Still, Linux will be playing catch-up for a few years after 10.6 comes out.
    You meant X, right? In many other areas Mac OS just looks lame in comparison to Linux. In my opinion KDE4 just kills the Mac OS desktop, and that's what the average user sees. He doesn't know whether the system is using X or Quartz, and it's meaningless to him. Can you explain to me what's so cool about Quartz?

    Many SCSI/SATA drives do not support fsync (although they claim that they do).
    Maybe the drive in the benchmark does not support fsync and Ubuntu tries to do it another way, by emulation, etc., and that's why it's slow (but who knows if that was what was slow?). Don't take it too seriously, it's just my simple thinking :>

    Btw. I don't like Macs, but I still write "Mac OS". When I was younger I wrote "mac os", "windows" and "Linux", but that was just me being funny.

    @glasen

    Ubuntu 8.10 is faster than previous versions in my opinion, so I recommend not taking those results too seriously and having fun with your favourite system.
    Last edited by kraftman; 11-13-2008, 02:54 PM.



  • deanjo
    replied
    Originally posted by Thetargos View Post
    I do think that what X needs (not only for Linux, but for any major Unix-like OS that relies on it for graphics [BSD, Irix, Solaris, etc.]) is to undergo a serious, deep clean-up and optimization of old, convoluted and cumbersome routines that could be accelerated. I know that for the most part the focus in this area (especially for the desktop) is on 3D and improved 2D acceleration and whatnot, and that this clean-up and optimization of X doesn't necessarily mean dropping support for ancient hardware... though it won't be an easy process. As far as I know, ever since pretty much all Linux distributions departed from XFree86, this clean-up has been going on in Xorg, and while things have indeed improved, much more work is needed. I'm sure most of it is indeed underway, but it won't happen fast. It has been quite amazing how fast things have moved on the Free software end ever since there has been more focus on Linux on the desktop; maybe things will improve fast enough (then again, maybe not).


    There are a number of things that simply won't happen in Linux, not in the traditional way anyway, like HD video acceleration in the main Xorg tree, simply because 'HD' implies the use of DRM technology to "protect" the stream. However, that doesn't mean that, for example, Theora streams of 1920x1080 couldn't be accelerated on the graphics hardware (decoding, scaling, etc.) and still be awesome. The problem with all things 'HD' is the vagueness of the term; what is HD for one person may not be for another. Take the studios, for instance: HD would be any video stream of 856x480 and up (to 1920x1080), but for the studios 'HD' also means that the content should be protected in some way, which implies the use of DRM; and for the bulk of consumers and even computer-oriented people, HD also implies certain formats (like Blu-ray, AVC1, H.264, WMV HD, etc.). In short, there's general confusion. So what if Xorg did support "HD" in Xv and XvMC? Most likely it would be in a way that not many would expect (like the aforementioned Theora decoding and scaling support, etc.).

    It's good to see Apple push the envelope and bring a computer-centric solution to the whole media, HD and entertainment conundrum; after all, that's pretty much the focus of their business nowadays. Let us hope that Linux desktops can also be part of a similar solution, or, why not, innovate upon the initiative.
    You're quite right that people often confuse HD playback with playback of DRM'd HD media. The thing is that there are plenty of other valid uses for non-DRM HD playback, such as FTA HD recordings, personal HD camcorders, etc. The main concern for Linux, however, is just the plain ability to play such non-DRM HD media with good results.

    Accessing the AES engines on the cards should be an "I'm bored and have nothing else to do" afterthought project, or left up to the commercial software crews (although using that engine would be kind of cool for hooking into things like OpenSSL and such). The decryption actually takes very little CPU and shouldn't be a concern at all for the X crews. Let DVD Jon worry about that stuff. All X should be concerned with is providing a good standardized API to allow video playback to be offloaded to the GPU, where it belongs.



  • Thetargos
    replied
    Originally posted by deanjo View Post
    It's a pretty massive enhancement, not a complete rewrite, that's for sure, with more focus being put on OpenCL, Grand Central, and HD video acceleration (at least it wasn't a rewrite when I left Apple a couple of months back, and the HD acceleration can be found on the new MacBook line). Still, Linux will be playing catch-up for a few years after 10.6 comes out.
    I do think that what X needs (not only for Linux, but for any major Unix-like OS that relies on it for graphics [BSD, Irix, Solaris, etc.]) is to undergo a serious, deep clean-up and optimization of old, convoluted and cumbersome routines that could be accelerated. I know that for the most part the focus in this area (especially for the desktop) is on 3D and improved 2D acceleration and whatnot, and that this clean-up and optimization of X doesn't necessarily mean dropping support for ancient hardware... though it won't be an easy process. As far as I know, ever since pretty much all Linux distributions departed from XFree86, this clean-up has been going on in Xorg, and while things have indeed improved, much more work is needed. I'm sure most of it is indeed underway, but it won't happen fast. It has been quite amazing how fast things have moved on the Free software end ever since there has been more focus on Linux on the desktop; maybe things will improve fast enough (then again, maybe not).


    There are a number of things that simply won't happen in Linux, not in the traditional way anyway, like HD video acceleration in the main Xorg tree, simply because 'HD' implies the use of DRM technology to "protect" the stream. However, that doesn't mean that, for example, Theora streams of 1920x1080 couldn't be accelerated on the graphics hardware (decoding, scaling, etc.) and still be awesome. The problem with all things 'HD' is the vagueness of the term; what is HD for one person may not be for another. Take the studios, for instance: HD would be any video stream of 856x480 and up (to 1920x1080), but for the studios 'HD' also means that the content should be protected in some way, which implies the use of DRM; and for the bulk of consumers and even computer-oriented people, HD also implies certain formats (like Blu-ray, AVC1, H.264, WMV HD, etc.). In short, there's general confusion. So what if Xorg did support "HD" in Xv and XvMC? Most likely it would be in a way that not many would expect (like the aforementioned Theora decoding and scaling support, etc.).

    It's good to see Apple push the envelope and bring a computer-centric solution to the whole media, HD and entertainment conundrum; after all, that's pretty much the focus of their business nowadays. Let us hope that Linux desktops can also be part of a similar solution, or, why not, innovate upon the initiative.



  • glasen
    replied
    Hey Michael, will you please rerun the Ubuntu vs. Fedora benchmark in light of these results?

    I mean, the Mac Mini has nearly the same CPU (a Core 2 Duo at 1.87 GHz, though the Mac Mini has more L2 cache) and the same amount of memory as your Lenovo T61 in the older tests, and magically Ubuntu 8.10 gets results twice as fast as in the old tests. And they are nearly the same as the ones from Ubuntu 7.04 and 7.10.

    Now, does anyone still believe that the bad results in the older tests had anything to do with "Ubuntu is getting slower!"?



  • deanjo
    replied
    Originally posted by drag View Post
    Oh, and I am open to Mac OS actually being faster than Linux in a lot of ways.

    But I just need more information before I can really make a judgement call.

    If I had Mac OS around right now I'd play with it and try to verify the article's numbers, but I don't.
    Just as an FYI, SQLite does not use fsync on OS X's HFS+; it uses fullfsync. At that point, if a write is not complete it's entirely the HD cache's fault, as the cache is what reports that the write is complete. You can easily verify this by looking at the os_unix.c file in SQLite. Why Apple created F_FULLFSYNC is simple: data integrity. Many SCSI/SATA drives do not support fsync properly (although they claim that they do).

    Right from the OS X man page:

    DESCRIPTION
    Fsync() causes all modified data and attributes of fd to be moved to a permanent storage device. This normally results in all in-core modified copies of buffers for the associated file to be written to a disk.

    Note that while fsync() will flush all data from the host to the drive (i.e. the "permanent storage device"), the drive itself may not physically write the data to the platters for quite some time and it may be written in an out-of-order sequence.

    Specifically, if the drive loses power or the OS crashes, the application may find that only some or none of their data was written. The disk drive may also re-order the data so that later writes may be present while earlier writes are not.

    This is not a theoretical edge case. This scenario is easily reproduced with real world workloads and drive power failures.

    For applications that require tighter guarantees about the integrity of their data, Mac OS X provides the F_FULLFSYNC fcntl. The F_FULLFSYNC fcntl asks the drive to flush all buffered data to permanent storage. Applications such as databases that require a strict ordering of writes should use F_FULLFSYNC to ensure their data is written in the order they expect. Please see fcntl(2) for more detail.
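    The fallback pattern the man page describes can be sketched roughly like this (a hedged illustration, not SQLite's actual code; the file name is made up, and F_FULLFSYNC only exists on Darwin):

```python
import fcntl
import os

def flush_to_disk(fd):
    """Push fd's written data toward stable storage.

    On Darwin, F_FULLFSYNC asks the drive itself to flush its write
    cache to the platters; elsewhere (or if the filesystem rejects it)
    we fall back to fsync(), which only guarantees the data left the
    host, not that the drive committed it."""
    if hasattr(fcntl, "F_FULLFSYNC"):
        try:
            fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
            return
        except OSError:
            pass  # filesystem without F_FULLFSYNC support; fall back
    os.fsync(fd)

# Usage: append one record durably, as far as the drive admits.
fd = os.open("journal.tmp", os.O_CREAT | os.O_WRONLY | os.O_APPEND, 0o644)
os.write(fd, b"commit\n")
flush_to_disk(fd)
os.close(fd)
os.unlink("journal.tmp")
```

    On Linux this degrades to a plain fsync(), which is exactly the asymmetry being pointed at here: the OS X path pays for a drive-cache flush that a bare fsync() path does not.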
    As far as the bonnie++ test goes, the concern about it not completing a write is addressed by using the -b option, which forces an fsync() after every write and an fsync() of the directory after a file has been created or deleted. Not as ideal a solution as using fullfsync, since if the drive again doesn't support fsync() properly, the potential for data loss is still there.

    That all being said, you can guarantee that the SQLite tests are accurate as far as OS X is concerned. If Ubuntu is using fsync() in SQLite, you do not have anywhere close to that assurance.
    Last edited by deanjo; 11-12-2008, 02:16 AM.



  • drag
    replied
    Oh, and I am open to Mac OS actually being faster than Linux in a lot of ways.

    But I just need more information before I can really make a judgement call.

    If I had Mac OS around right now I'd play with it and try to verify the article's numbers, but I don't.



  • drag
    replied
    Originally posted by Thetargos View Post
    The applications ARE the same, even built from source on both platforms, and the options passed by the PTS are also the same. The differences in the I/O tests are not the fault of the apps or of the benchmarking tool measuring their performance, but rather of how the individual systems handle the requests being made by the applications (case in point, the SQL tests). What you are saying is more like comparing a BMW V8 engine to a Dodge HEMI engine. The HEMI can "shut off" four cylinders when they are not in use, hence being more fuel efficient, and may even have a bit of lag to power them up when they're needed (acceleration lag, if you will). The BMW engine, on the other hand, since all 8 cylinders are active all the time, will not have said lag when accelerating[1]... It's inherent to the ENGINE, not the fuel used or the road the cars are driven on.


    Well actually the point of the benchmark is to remove the subjective element of the platform and try to approach it with a more 'scientific' bent.

    Bonnie++ is a nice benchmarking tool, and comparing file systems is very difficult, much more difficult than it seems at first blush. Which is why, if you're doing FS benchmarks, it's a necessity to see the configuration and full output from the application. It's not like benchmarking an application or a game, where you have the same app on both systems.

    Our goal here is not to benchmark the benchmarking application, but to measure the performance of the OSes relative to one another.

    Let's look a little more into this:

    So, and I don't know this with absolute certainty, it may be that Mac OS X lies to you about whether or not a 'sync' completed successfully. But let's assume that it does.

    The point of 'sync' or similar system calls is that the OS does its best to make sure that your data is written out to the disk. This is important for a number of reasons, but mostly in case something bad happens to your system. It's a preventative measure that exists to protect your data.

    So if the system is lying about what it's doing in order to make you think that it's faster than it really is, then it's putting your data at higher risk of corruption. That is, unless there is some miracle of computer science that Apple figured out or whatnot.

    That is... if you didn't think your data was important enough to have a sync done correctly, then what is the point of having sync in the first place, and why use it at all?

    -----------------------

    And let's look at Bonnie++ again.


    It's designed to do performance analysis of file systems and hard drives....

    Most operating systems don't immediately write data out to your hard drive. This is because hard drives are very, very slow and memory is fast. So if you can access and write data only in main memory, then that's like having an SDRAM-based hard drive for your data.

    Of course if the power goes out, then that's lights out for your data: corrupted and gone forever in a few seconds (hence the fsync/sync stuff).
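    The buffered-versus-synced asymmetry described above can be sketched like this (a rough illustration; the file name and record counts are arbitrary). Timing small writes with and without an fsync() per write shows how much of a sync-heavy benchmark is really measuring the flush path rather than memory:

```python
import os
import time

def timed_writes(path, n, sync_each):
    """Write n small records; optionally fsync() after each one."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    start = time.perf_counter()
    for i in range(n):
        os.write(fd, b"record %d\n" % i)
        if sync_each:
            os.fsync(fd)  # push the data out of the OS cache toward the drive
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return elapsed

buffered = timed_writes("bench.tmp", 500, sync_each=False)
synced = timed_writes("bench.tmp", 500, sync_each=True)
print("buffered: %.4fs  fsync-per-write: %.4fs" % (buffered, synced))
```

    On a real spinning disk the fsync-per-write run is typically orders of magnitude slower, which is why how honestly each OS implements the sync dominates this kind of benchmark.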

    So unless you're using data sets that you're sure are getting written to disk, you're not really testing anything but memory access.


    -----------------

    To put it another way: Bonnie++ has, as one part of its test suite, random data access benchmarks. The idea is that it writes out a bunch of data and then immediately begins to read it back in.

    In real-world situations that sort of behavior is pretty pathological. It's fairly odd to write out data and then immediately read random bits of it back. You may do it for a database, or for editing videos or whatnot... but even then, the likelihood that the data at some point gets flushed out to disk is very high. So what you're really interested in knowing is how well the file system deals with multi-user or multi-tasking loads, as well as application start-up times and that sort of thing. Most of those things involve reading in stale data from the disk and file systems. Most of those things involve data sets larger than your main memory.

    I mean, at some point you're going to be reading data randomly from a disk, no matter what; it's why you have a disk. So you want to know how well that actually performs. Unless bonnie++ actually writes out to disk, you have no way of knowing how the system will perform under those circumstances.
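    As a rough sketch of the caching effect at work here (the file name and sizes are arbitrary): freshly written data sits in the OS page cache, so random reads issued right after a write mostly measure memory speed, not the platters, unless caches are dropped or the data set exceeds RAM:

```python
import os
import random
import time

def random_read_time(path, size_mb, reads, block=4096):
    """Write a file, then time random block reads from it.

    Without dropping caches (or using a data set larger than RAM),
    these reads are served from the page cache, so the result says
    little about actual disk seek performance."""
    data = os.urandom(size_mb * 1024 * 1024)
    with open(path, "wb") as f:
        f.write(data)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(reads):
            f.seek(random.randrange(0, len(data) - block))
            f.read(block)
        elapsed = time.perf_counter() - start
    os.unlink(path)
    return elapsed

print("cached random reads took %.4fs" % random_read_time("probe.tmp", 4, 1000))
```

    A benchmark that wants to exercise the disk itself has to defeat this cache, e.g. with data sets well beyond main memory, which is exactly the configuration detail missing from a bare pair of graphs.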


    --------------------------

    If that is too long to read, then look at it this way:

    It's very easy to tune a system's performance to favor benchmark results. But end users will see their real performance sacrificed as a result, because real-world usage rarely follows benchmarks, especially FS benchmarks. If you wanted realistic benchmarks, they would probably involve several weeks of testing per configuration. No fun, too expensive.

    So bonnie++ can be useful and can provide useful information, but not when all you see is a couple of graphs. That's not enough, unfortunately. Depending on what you're actually testing, the benchmark needs to be configured differently.

    For example, maybe your goal is really to test file-system cache performance. That's a worthy goal, and it may expose bugs or other issues. So you use small data sets and try to keep the benchmarks small enough that nothing gets written to disk.

    But for an end user that isn't as interesting as the actual performance you'll get when reading or writing data to the drive, unless it's very bad or something.



  • deanjo
    replied
    Originally posted by Thetargos View Post
    Indeed, my point is that Quartz was in development for a long time before OS X debuted in 2001, and it has been extended over the last seven years, supposedly having a major rework for 10.6 (supposedly; I know it is not a major rewrite or anything, most likely Apple's PR sweet talk).

    It's a pretty massive enhancement, not a complete rewrite, that's for sure, with more focus being put on OpenCL, Grand Central, and HD video acceleration (at least it wasn't a rewrite when I left Apple a couple of months back, and the HD acceleration can be found on the new MacBook line). Still, Linux will be playing catch-up for a few years after 10.6 comes out.



  • Thetargos
    replied
    Originally posted by deanjo View Post
    While NeXT's Display PostScript was a predecessor to Quartz, it does differ in many ways; it was not a direct port to OS X. In fact it was developed entirely in-house at Apple, and no code from NeXT was used in Quartz (OS X caches bitmaps of the window graphics and does not execute PostScript).

    Quartz has always been a part of OS X. It debuted in 10.0 (early 2001), Quartz Extreme debuted in 10.2 (early 2003), and Quartz 2D Extreme (now called QuartzGL in 10.5) debuted in 10.4 (late 2005). It's now close to being 2009 and X still lags behind in many respects.
    Indeed, my point is that Quartz was in development for a long time before OS X debuted in 2001, and it has been extended over the last seven years, supposedly having a major rework for 10.6 (supposedly; I know it is not a major rewrite or anything, most likely Apple's PR sweet talk). We all agree that X11 has to evolve, as it has in fact been doing. There are going to be some trade-offs, trade-offs that will mean Linux will be able to do some nice new things but no longer able to do others (tunneling is most likely to disappear, according to some "experts"). For one thing, I really hope that there is still some separation between the kernel and the graphics system, as that is one of the biggest strengths of Linux and Unix-like systems. So while KMS is a neat thing to have, I really hope a sane distance is preserved between the graphics layer and the kernel/console layer.

