
Ubuntu 11.10 vs. Mac OS X 10.7.2 Performance


  • mitcoes
    Thanks and a precision

I do like benchmarks, and since MS Windows has roughly a 95% market share, I would like to see it benchmarked as a reference.

I posted (shared) your entry on G+ from Google Reader, and the comment I received was exactly that: what about MS Windows?

Not for this article; my suggestion is to build another database, like the ones at sites such as Tom's Hardware, with a synthetic benchmark using a Mac mini as the baseline, in order to benchmark several Linux / OS X / MS Windows systems and hardware configurations.

This database could become an excellent tool for geeks and tech users deciding what to buy. Phoronix readers would probably install Linux, but it would also draw visits from MS Windows users selecting their hardware, and perhaps persuade some of them to at least try a Linux OS because of a benchmark.

It was not a criticism. I enjoy Phoronix a lot, and I read almost all the posts, or at least the beginning of them. It was only a suggestion; sorry, English is not my first language, and I am not as polite as I would like to be when making suggestions.

As a benchmark lover, the more data the better, and the easier it is for everybody to read, the better. I know how to read them, and I remember older benchmarks, but since I love statistics (I took four statistics courses in my economics degree, the maximum credits I could choose) I would like them to be clear for people who are not so in love with maths.

A well-known product like the Mac mini would be a great KISS reference; ordinary people would understand any benchmark better in terms of "25% faster than a Mac mini" or "3x a Mac mini".

I do not like monolithic benchmarks. If you run 30 tests you will get different results, with an average and a standard deviation, and that matters: an average result with a low standard deviation is not the same as one with a high standard deviation. Benchmarks usually show averages but not the standard deviation of the tests.
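The point about reporting the spread as well as the average can be sketched in a few lines of Python; the run times below are made-up numbers, not real benchmark data:

```python
import statistics

# Two hypothetical benchmarks with the same average run time (seconds):
# one stable, one very noisy between runs.
stable_runs = [10.1, 9.9, 10.0, 10.2, 9.8]
noisy_runs = [7.0, 13.0, 9.5, 12.5, 8.0]

for name, runs in [("stable", stable_runs), ("noisy", noisy_runs)]:
    mean = statistics.mean(runs)
    sd = statistics.stdev(runs)  # sample standard deviation
    print(f"{name}: mean={mean:.2f}s sd={sd:.2f}s")
```

Both series average 10.0 s, but the standard deviations differ wildly, which is exactly the information an average-only bar chart hides.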

And I would like you to earn some income from advertisers, at least on the OpenBenchmarking page. With more than 700 followers on G+, many of them enjoy the entries I share, though I select the ones best suited for a general audience. Even though many techies follow me, these benchmarks need an additional effort to be easily understood by non-techie people who enjoy benchmarking.

And sorry again, I'm unusual: I love statistics and I love computer science (I also studied computer science engineering, for pleasure rather than work, though I did not finish). I like to share, and sometimes Phoronix articles are only for very technical people when, with a little twist and without losing accuracy, they could be enjoyed by more people.

That is all. Thanks, of course; I enjoy your work a lot, and I only want other people to enjoy it too and for you to have more audience and visits.


  • drag
    Originally posted by mitcoes View Post
Give every test a weight, and test the MS Windows OSs (XP, XP 64-bit, 7 32-bit, 7 64-bit) plus Ubuntu, Fedora, Arch, and Sabayon in 32- and 64-bit.
The point of doing OS X versus Linux is that they are both Unix systems that run the same software with pretty much the same compilers and build environment.

The problem with doing these benchmarks is that they are very inconclusive. The more variables you throw into the mix, the more useless they become. To understand and interpret them correctly you must have a fairly deep understanding of what the benchmarks actually accomplish and why they differ between Linux and OS X. Then you have to apply that knowledge to how your own applications function.

All sorts of factors matter: compiler revision, make flags, differences in file system caching behavior, small random file performance versus large random or sequential performance, CPU cache utilization patterns, and hundreds of other details. Michael has not attempted any sort of analysis, so a great deal of work is left to the reader to understand what the graphs actually mean.

Throwing Windows into the mix would produce little more than gibberish.

Anyways: nobody should give a crap about anything other than Windows 7 64-bit, Linux 64-bit, and OS X 64-bit.


  • mtippett
    Hi mitcoes,

    Although these results may not be the ones that Michael used, they look to be related. If you look at any result page on OpenBenchmarking, most of them have the link in the "OPC" tab which will give you heatmaps for the tests.

    These provide some interesting information which would appear to cover a lot of what you are interested in. Namely it provides you guidance on the relative performance against other systems that have run the test in the OpenBenchmarking database.

Broadly, the heatmaps use shading to represent the frequency of a particular value. There are also two black vertical lines marking the 33rd and 66th percentile ranks, which lets you label a result "slow", "medium", or "fast". Glancing at the results, I see the following.

Disk performance on these systems is generally slow.
Memory performance on these systems is actually quite fast.
CPU performance is okay to fast, although the compiler has a pretty strong effect on the particulars.
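The slow/medium/fast banding described above can be sketched roughly as follows; the result values are invented for illustration (the real cut lines come from the OpenBenchmarking database):

```python
# Hypothetical results for one test across many systems (higher = faster).
results = [210, 340, 120, 415, 290, 180, 505, 260, 330, 450]

# Approximate 33rd and 66th percentile cut points, analogous to the
# two black vertical lines on the heatmaps.
ranked = sorted(results)
p33 = ranked[int(len(ranked) * 0.33)]
p66 = ranked[int(len(ranked) * 0.66)]

def grade(value):
    """Band a single result relative to the crowd-sourced distribution."""
    if value < p33:
        return "slow"
    if value < p66:
        return "medium"
    return "fast"

for v in (120, 290, 505):
    print(v, grade(v))
```

This is only a rough percentile estimate over a toy list; the idea is simply that a single number becomes meaningful once it is placed against the distribution of everyone else's runs.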

There is also the small hardware link at the top that leads to a generalized report showing where different products lie relative to their general performance characteristics. Given that result creation isn't managed and audited, we are relying on a crowd-sourced set of results to drive the performance gradings of the individual pieces of hardware.



  • mitcoes
I would like to see OpenGL and OpenCL tests in the article too.

I think some video tests should be done, at least Ubuntu with open-source video drivers and Ubuntu with proprietary video drivers.

I also think this machine, the Apple Mac mini, is a good one to take as a base-100 reference for a global OpenBenchmarking score.

Give every test a weight, and test the MS Windows OSs (XP, XP 64-bit, 7 32-bit, 7 64-bit) plus Ubuntu, Fedora, Arch, and Sabayon in 32- and 64-bit.

And, on Linux, both the open-source and the proprietary video drivers.

And adding GPUs such as ATI and NVIDIA to the equation.

This base 100 could be reset with every Mac mini release, publishing the inflation index from model to model: Mac mini 2012 = 100; a future Mac mini 2013 = 100 on the 2013 base and, hypothetically, 150 on the 2012 base.

You should still publish the real results, but graphs normalized to a base of 100, like the early benchmarks against the original IBM AT, would be easy to read and understand. Since the Mac mini is compatible with OS X, MS Windows, and Linux, Apple, Wintel, and Linux products could all be measured in multiples of Mac mini performance, so having this machine as a 100 (or 1000) base would be an excellent reference point for every test you publish.

It could even be used to compare tablets and smartphones, since Arch, Fedora, Ubuntu, and Sabayon have ARM versions (and there will be future Intel SoCs), and the same tests could be run on iOS products. A database with a unified OpenBenchmarking number, like 3DMark or Futuremark, would let you benchmark one product against another; you could also go to the full table of benchmarks to see the differences between similar products, plus price/performance data to choose the best product for your budget.

Of course, separate databases for SoC products, laptops, branded desktop products, and, for desktops, by parts: motherboards, processors, RAM, SSDs/HDDs, and even monitors.

Last but not least, where less is better the partial score must be an inverse function: a result of 75 against a base of 100 must score 100/75 = 133.
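The base-100 scoring scheme suggested above, including the inversion for lower-is-better metrics, can be sketched like this; the baseline figures and test names are invented placeholders, not real Mac mini results:

```python
# Hypothetical baseline results (e.g. from a reference Mac mini).
BASELINE = {
    "run_time_s": 100.0,  # lower is better
    "fps": 60.0,          # higher is better
}

def score(test, value, lower_is_better):
    """Normalize a result to base 100 against the reference machine.

    For lower-is-better metrics the ratio is inverted, so a machine
    that finishes in less time than the baseline scores above 100.
    """
    base = BASELINE[test]
    if lower_is_better:
        return 100.0 * base / value
    return 100.0 * value / base

# A run time of 75 s against the 100 s baseline scores 100/75 * 100 ≈ 133.
print(score("run_time_s", 75.0, lower_is_better=True))
# 90 fps against the 60 fps baseline scores 150.
print(score("fps", 90.0, lower_is_better=False))
```

A weighted average of such per-test scores would then give the single composite number the post asks for, with the inversion guaranteeing that "above 100" always means "better than the baseline".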


  • oleid
Isn't SciMark a Java-based benchmark? No wonder clang and gcc-4.2 perform the same on Mac OS X.
