Ubuntu 9.04 vs. Mac OS X 10.5.6 Benchmarks
-
The majority of Linux users today run 64-bit operating systems, since there is a serious performance boost and there are no compatibility problems anymore. So why these benchmarks were done with a 32-bit version of Ubuntu in the first place I can hardly understand. 64-bit would make the difference in FFmpeg, Ogg and LAME encoding, and I'm not talking about tweaking and hacking, just a plain desktop choice. Currently the benchmarks show 17 vs. 12 in Mac OS X's favour, while with 64-bit the result would pretty easily be 14 vs. 15 in Ubuntu's favour.
A third benchmark with the 64-bit version of Ubuntu is essential, imho.
-
Originally posted by drag: Holy crap, NO.
First off: the kernel OS X uses is NOT A MICROKERNEL.
The OS X kernel is called XNU. It's a so-called 'hybrid' kernel that combines code from Mach, a development kernel that died in 1995, with BSD stuff. The Mach kernel at different times in its history was a microkernel and then not a microkernel. OS X does not use a microkernel.
The Windows NT kernel was another one that was based on a microkernel design but is not a microkernel. Early versions of NT were microkernels, but unfortunately for that design Microsoft could not figure out how to make it scale, and the excess overhead caused by the message-passing design doomed it. So later versions of the NT kernel were monolithic.
If you want to you can call them 'hybrid kernels', but I think that is just a made-up term to make the OS kernel sound all microkernel-ish and cool while it is, in fact, a modular monolithic design.
Nope. Not going to happen. Microkernels were essentially a pipe dream, and only one microkernel-based OS actually made it into widespread use. That OS was QNX, and it was popular for embedded systems due to its real-time nature.
But it wouldn't scale to anything big, and nobody wanted to use it as a desktop or server platform.
-
Originally posted by deanjo: Ultimately yes, firmware does decide data's fate. If the firmware is giving false responses back then there is nothing an OS can really do to affect that. If that is the case, though, it should affect all OSes. As a side note, there is something drastically wrong when it comes to SQLite performance on Ext3. Switch to XFS and you will get much faster results, and Ext3 has been getting slower since around 2.6.18. It's something I have noticed for a while now.
I did some SQLite tests in KVM and Ext3 is really slow when compared to Ext4:
It probably means Ext4 and HFS+ are using a cache and Ext3 isn't. Or Ext3 just sucks.
P.S. The results are reproducible.
P.S.2 If TeeKee is right (and he probably is) this Phoronix benchmark sucks a little...
Last edited by kraftman; 13 May 2009, 10:51 AM.
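For reference, a minimal sketch of the kind of SQLite insert benchmark that exposes these filesystem differences (the table name, row counts and batch sizes are my own, not from the original Phoronix tests). Committing per row forces an fsync per COMMIT, which is exactly the path where Ext3's journalling hurts:

```python
import os
import sqlite3
import tempfile
import time

def time_inserts(rows=500, per_transaction=1):
    """Insert `rows` rows, committing every `per_transaction` rows,
    and return the elapsed wall-clock time in seconds."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    start = time.perf_counter()
    for i in range(0, rows, per_transaction):
        with conn:  # one transaction (and one fsync) per batch
            for j in range(i, min(i + per_transaction, rows)):
                conn.execute("INSERT INTO t VALUES (?, ?)", (j, "x" * 64))
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

# Per-row commits hammer the filesystem's sync path; batching all
# rows into one transaction is usually dramatically faster.
print("per-row commits: %.3fs" % time_inserts(per_transaction=1))
print("single commit:   %.3fs" % time_inserts(per_transaction=500))
```

Running the same script on Ext3, Ext4 and XFS on identical hardware is the quickest way to see whether the slowdown lives in the filesystem's fsync path rather than in SQLite itself.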
-
Does room for error exist? Absolutely! I think more tests need to be done using Mac systems for sure! Why not put a Mac server against a Debian server, and compare how several Linux desktop distros perform on Mac hardware. Put both of them on x86_64 and build Linux with the same optimizations you get out of Intel Macs, i.e. SSE instructions. The Linux kernel doesn't even do things to benefit from this, to the best of my knowledge.
But as I see it now, we got our asses handed to us. Is it sad? You better bet it is. But we know we can improve. Excuses are excuses... The whole 'Fedora is amazing' thing is a pile, because we all know the difference is nominal at best.
We lost... let's not act like 9-year-olds and debate that we really didn't. Certainly we can be grown men (and women) and discuss how we can actually make the situation better.
-
Originally posted by Hephasteus: Ya, I agree with you on most of your points. But I'd argue that microkernels are all over the place. In fact, I'd call every single memory and thread manager of every single SQL database system today a microkernel implementation hybridized to its monolithic OS.
I don't know anything about that... but I do know that having a threaded model and memory management isn't something that is unique to SQL databases. Pretty much every large multi-threaded application is going to have to manage its own memory and threads and such.
Remember, what makes a microkernel a microkernel is that the actual kernel doesn't do anything more than message-passing.
Then various separate processes 'orbit' that kernel and provide services that the OS can use. For example, the 'Hurd' is a collection of programs that provide low-level facilities on top of an L4 kernel. So you'd have a program that provides access to the hard drive, then another program that provides file system access, then another that provides POSIX APIs, etc. etc.
So all the kernel does is pass messages from one service daemon to another. It has zero functionality beyond that.
And, perversely, microkernels tend to be hugely complicated. They are usually quite a bit larger than a monolithic kernel even though they have no functionality built in besides message handling.
It's pretty obvious why they are not really that successful if you can step back and look at what they really are.
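The structure described above can be sketched in a few lines. This is a toy model (my own naming, with threads standing in for separate processes), not any real microkernel API: the 'kernel' owns a single routing loop and does nothing but deliver messages between server inboxes:

```python
import queue
import threading

# Each "server" is a separate thread (standing in for a separate
# process) with its own inbox; the kernel only routes messages.
inboxes = {"disk": queue.Queue(), "fs": queue.Queue(), "app": queue.Queue()}
kernel_queue = queue.Queue()

def send(dest, msg):
    kernel_queue.put((dest, msg))  # all IPC goes through the kernel

def kernel():
    while True:
        dest, msg = kernel_queue.get()
        inboxes[dest].put(msg)     # the kernel's only job: delivery

def fs_server():
    op, path, reply_to = inboxes["fs"].get()
    # A real fs server would itself message the disk server; we fake it.
    send(reply_to, "contents of " + path)

threading.Thread(target=kernel, daemon=True).start()
threading.Thread(target=fs_server, daemon=True).start()

send("fs", ("read", "/etc/motd", "app"))
print(inboxes["app"].get())  # prints: contents of /etc/motd
```

Note that even this trivial read costs two trips through the kernel's queue; that per-message overhead, multiplied across every OS service, is what the scaling complaints above are about.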
----------------
Now a modern monolithic kernel, like the Linux kernel, is a big object-oriented, multithreaded monster. Each major feature has its own thread, and there are a lot of different small 'kernel-level program'-type things that provide services and features used by the rest of the kernel. The difference is that there is no message passing going on, and they all occupy the same address space, so one can twiddle another's bits and read another's memory in a very efficient manner.
This is why proprietary software like Nvidia's drivers, which like to stuff huge amounts of code into the kernel, tend to suck even though they are high performance and have lots of features. The Nvidia driver can, at any time, arbitrarily access and overwrite any other part of the kernel. If the Nvidia driver has a memory overflow or other hiccup, it can easily blow away the memory containing... say... your Ext3 support.
With normal applications, each one occupies its own virtual memory space. That is, each application sees its own unique address space. To the application, all it sees is its own virtual 4GB of RAM that it can do with as it will. This is the 'virtual' part of virtual memory. Each application has its own VM sandbox, and it's very difficult for that application to break out of its memory sandbox... it can't even see what is going on in the memory of other applications. With kernel modules in Linux there are no memory protection features like that, and a kernel module can very easily view and edit any other part of the running kernel.
There really isn't anything that would stop it, and that is the major design deficiency of a monolithic kernel.
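That sandbox can be demonstrated directly from userspace. In this sketch (my own example, not from the post), a child process writes to what looks like shared state, but the write lands only in the child's own virtual address space:

```python
import multiprocessing

data = {"value": 1}  # ordinary process memory, not shared memory

def child():
    # This write happens in the child's own virtual address space;
    # the parent's copy of the page is untouched.
    data["value"] = 999

if __name__ == "__main__":
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
    # The parent still sees its own copy: each process is sandboxed.
    print(data["value"])  # prints 1, not 999
```

A buggy kernel module gets no such protection: it runs in the kernel's single shared address space, which is exactly the contrast being drawn here.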
This is why the OSS video driver model tries to shove as much of the video driver into userspace as possible... the kernel portion is kept as small as possible, and the majority of the video processing happens via the DRI2 protocol in userspace.
Last edited by drag; 13 May 2009, 12:53 PM.
-
Thank you for sharing your OS understanding. This has been a very enlightening thread. It's good to see more and more daemons running on Linux, and I see more clearly that a herd of daemons might not be that great. It'll be interesting watching them hybridize the kernel as it moves to more advanced video handling. Hope they do a good job. But it's looking like a bumpy ride so far. Can't last forever though.
-
Originally posted by L33F3R: Does room for error exist? Absolutely! [...] Certainly we can be grown men (and women) and discuss how we can actually make the situation better.
-
Originally posted by Apopas: To make the situation better is to make a 32-bit OS act as if it were 64-bit. You admit defeat only after a fair battle.
That brings up a good plus for Linux: unlike the Mac, it can use a large variety of hardware. Historically, problems have erupted with hardware drivers, but I have noticed that in recent times the driver situation has been getting a lot better. Additionally, I can build a $300 computer and play ETQW on high quality with Linux; the Mac mini is $600 and has moderate HDD/RAM at best.
Don't look at the situation from a linear perspective. I agree more tests need to be done, but let's not forget there's room for improvement.
-
Originally posted by L33F3R: You can't have a fair battle on two different platforms, one of which consists of in-house hardware; Apple is going to take the advantage in any OS fight because of this.