I think some comments about the graphics libraries and drivers used on both systems are pertinent... and maybe it is worth illustrating a bit more how the graphics subsystems work on both platforms.
First, Mesa vs. Binary Blobs
Mesa is an Open Source implementation (keep the word "implementation" in mind for the concepts exposed hereafter) of the OpenGL specification. That implicitly means that Mesa is NOT necessarily the same as OpenGL (just as Wine is not Windows, but an implementation of the WinAPI under Unix); it provides some of the functionality OpenGL does. OpenGL is very bureaucratic and is governed by a board (the ARB) whose members agree on the basic set of features and functionality the API and library have to provide. OpenGL is deployed on systems through an ICD (Installable Client Driver), which usually provides the whole of the OpenGL functionality. By consensus this functionality is provided by a system library (libGL.so under Unix systems, OpenGL32.dll under Windows) plus a driver (the ICD) which makes use of that library. More often than not, the drivers also include their own optimized implementation of the API as part of the ICD (i.e. their own libGL.so or OpenGL32.dll [or similar]) in order to provide optimized OpenGL rendering. So what in fact happens whenever you compare the open drivers (Intel, ATi, Matrox, 3dfx, etc.) under Linux/FreeBSD/OpenSolaris against any of the binary blobs (nVidia/ATi) is that you are comparing apples to oranges: the blobs are proper ICDs, while the open drivers use Mesa as their OpenGL implementation. nVidia, ATi/AMD, Intel, SGI, Sun, Apple, Microsoft (they abandoned the ARB before Vista's release [and the OpenGL 2.0 specification release] IIRC, but I believe they're back), etc. are members of the Architecture Review Board for OpenGL. What does this mean?
Well, simply that the degree of performance you can expect from the implementation of any of these ARB members will be orders of magnitude above Mesa's (Mesa is not an ARB member, though I believe there are special considerations towards it). Then there is the degree of optimization that driver coders can build into a driver that uses Mesa; remember that Mesa, even though it implements the whole OpenGL spec, may not be as optimized as an ARB member's ICD.
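To make the libGL/ICD split above concrete, here is a toy Python sketch of the idea: one stable GL entry point, with either a Mesa-style software renderer or a vendor ICD plugged in behind it. Everything here (class names, the dispatch dictionary) is purely illustrative; the real mechanism is dynamic linking of the vendor's driver library, not Python objects.

```python
# Toy model of how a system libGL exposes one API while the
# implementation behind it can be Mesa or a vendor's ICD.

class MesaRenderer:
    """Stands in for Mesa's software implementation of the GL spec."""
    def draw_triangles(self, count):
        return f"mesa: rasterized {count} triangles in software"

class VendorICD:
    """Stands in for a vendor's optimized Installable Client Driver."""
    def draw_triangles(self, count):
        return f"icd: submitted {count} triangles to the GPU"

def make_libgl(backend):
    # libGL presents the same entry points regardless of the backend;
    # the application never knows which implementation it got.
    return {"glDrawTriangles": backend.draw_triangles}

libgl_mesa = make_libgl(MesaRenderer())
libgl_blob = make_libgl(VendorICD())

print(libgl_mesa["glDrawTriangles"](100))  # software path
print(libgl_blob["glDrawTriangles"](100))  # hardware path
```

The point of the sketch: benchmarking an open driver against a blob really compares two different backends behind the same API, which is the apples-to-oranges problem described above.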
This is why comparing Mesa against a proper ICD usually ends with Mesa being much slower: if not because of features or system optimization, then because of the large number of device architectures that make use of it. And there is another differentiator between Mesa and OpenGL ICDs: Mesa by itself does not require hardware acceleration (just as OpenGL doesn't either). To accelerate Mesa (and OpenGL on Linux), the DRI mechanism was devised. As its name implies, the Direct Rendering Infrastructure seeks to attain direct hardware access to accelerate OpenGL through Mesa (or a vendor's ICD), and this is where X gets more involved in the acceleration process, since by itself X does much of its rendering in an indirect manner (which, by the way, is part of the core of the problems it now presents for more sophisticated desktop rendering). So there are two extra components that help accelerate rendering in hardware through direct access: the userland DRI library, and the kernel-side Direct Rendering Manager (DRM) module, responsible for direct I/O to the hardware, parsing calls from the DRI library above it (carrying Mesa GL commands along to the graphics hardware). This is an overly simplified explanation, and I'm sure any DRI/DRM developers will have a heart attack reading my account of how this works (or how I think it works).
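A minimal sketch of the two paths just described, with made-up class names standing in for the real components (an X server hop for indirect rendering, and a DRI library handing commands down to the DRM kernel module for the direct path). None of these are real DRI/DRM interfaces; it only shows the shape of the two routes.

```python
# Toy sketch: indirect rendering funnels GL commands through the X
# server, while the DRI path goes userland DRI library -> kernel DRM.

class Hardware:
    def execute(self, cmd):
        return f"gpu ran: {cmd}"

class XServer:
    """Indirect path: every command takes a detour through the server."""
    def __init__(self, hw):
        self.hw = hw
    def forward(self, cmd):
        return self.hw.execute(cmd)  # extra hop adds protocol overhead

class DRM:
    """Kernel-side module doing direct I/O to the hardware."""
    def __init__(self, hw):
        self.hw = hw
    def ioctl(self, cmd):
        return self.hw.execute(cmd)

class DRILibrary:
    """Userland DRI library: carries Mesa's GL commands down to DRM."""
    def __init__(self, drm):
        self.drm = drm
    def submit(self, cmd):
        return self.drm.ioctl(cmd)

hw = Hardware()
indirect = XServer(hw).forward("glClear")       # app -> X server -> gpu
direct = DRILibrary(DRM(hw)).submit("glClear")  # app -> DRI -> DRM -> gpu
print(indirect)
print(direct)
```

Both paths reach the hardware; the difference is how many layers the commands traverse on the way, which is exactly why DRI was devised.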
At any rate, a much "fairer" comparison would involve putting together a system that matches the Mac Mini in hardware configuration and capabilities, but using another ICD OpenGL backend instead (like the nVidia blob with an nVidia 7050 IGP, or fglrx with an ATi HD3200/790G IGP). Then the graphics comparison would be much more level, simply due to sheer OpenGL support.
The X Server vs. Quartz
Again, overly simplified: as noted, X11 was engineered to do indirect, over-the-network, through-sockets rendering, and it has excelled at that over the last 24 years. However, this robustness is its Achilles heel for fast desktop rendering. In a nutshell, the client-server architecture of X (which makes it network transparent) has the server running on the local computer, and clients (other machines, programs) connect to it for rendering, using network packets. This is rather convoluted (though very convenient for a lot of tasks, such as mainframes, centralized rendering, etc.), and that's where DRI comes in: it is a system to bypass the X server and allow applications direct access to the graphics hardware instead. In this model the X server is still responsible for rendering the desktop, etc. From what I have been able to make out of Quartz (or rather Core Graphics), in its design the render path is much faster, as it indeed has direct access to the hardware through the Quartz Compositor (kind of like the X server) and Quartz Extreme (using OpenGL), and it has been heavily optimized through the use of SIMD instructions (AltiVec, SSE) and hardware acceleration (OpenGL).
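To illustrate the round trip described above, here is a toy client/server exchange over a local socket. It is not the real X11 wire protocol, just the shape of indirect rendering: the client serializes a request, the "server" does the drawing and replies. The request string and server function are invented for the example.

```python
# Toy illustration of X's client-server model: the client sends a draw
# request over a socket; the server performs the rendering and replies.
import socket
import threading

def x_server(conn):
    # Receive one serialized request and answer it, as a stand-in for
    # the X server parsing protocol packets and drawing on its behalf.
    request = conn.recv(1024).decode()
    conn.sendall(f"rendered: {request}".encode())

client_end, server_end = socket.socketpair()
threading.Thread(target=x_server, args=(server_end,)).start()

client_end.sendall(b"PolyLine 0,0 10,10")  # client issues a draw request
reply = client_end.recv(1024).decode()
print(reply)  # rendered: PolyLine 0,0 10,10
```

Every drawn primitive pays this serialize/send/receive round trip in the indirect model, which is cheap over a local socket but is exactly the overhead DRI lets OpenGL applications skip.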
The design and architecture of the two rendering systems, though not mutually exclusive, let you see the different applications they were conceived for. Quartz was built from the ground up to be a desktop rendering system, while X allows for more distributed rendering, which seems to have been one of its original goals. It is not surprising that Apple decided to depart from X when they created OS X and built Quartz and its compositor from scratch (well, based on NeXTSTEP) to better suit desktop needs. They did implement X11, but instead of X having its own server, it renders through Quartz as the server, which provides the protocol and libraries for X client applications. I think that in the not so distant future that is going to become the natural evolution for X on Linux: from an inherently indirect rendering nature to a direct rendering one, keeping backwards compatibility through support libraries. This transition (IMHO) started with the addition of the Composite, Damage and other extensions, which will have a more central role in X... Still a LOT of work. What would be better: to let X transition naturally, or to have another renderer like Wayland provide X11 compatibility through a mechanism similar to Mac OS X's Quartz? I don't know, and most likely both systems will coexist for some time before a decision is made on what direction Linux on the desktop takes.
PS Sorry for the long post.
Mac OS X 10.5 vs. Ubuntu 8.10 Benchmarks
Originally posted by kraftman:
I thought about that: "Still, linux will be playing catch up for a few years after 10.6 comes out." I quoted the other comment because I was answering some part of it.
Maybe it will prove that you're wrong?
Sorry, but I don't see it... Btw. I'm not interested in Apple's sweet talk. I'd prefer to see real advantages of Quartz over X.
For a reasonable layman's guide, albeit missing some of the 10.5 changes:
Keep in mind as well that compositing didn't have to be disabled on the Mac. It doesn't affect games there as it does on Linux, where it slows OpenGL games and such.
Originally posted by deanjo:
Read the whole thread topic. What does it say?
That is the exact quote you quoted.
And what would that prove, using 2 drivers that do not use Mesa? Mesa DOES have a huge factor in performance as well. Why do you think the blobs bypass MUCH of X?
I posted a link on Quartz and its subsystems. If you can't be bothered to read it there, then why bother?
Originally posted by kraftman:
Someone else started it. I thought that you wouldn't continue. And even if I agree, I just don't like the way you said it.
It's not this comment that I was talking about...
Why not compare the NVIDIA binary driver under Linux and the NVIDIA binary driver under Windows XP? It's more objective in my opinion, and if I get a slightly better result on Linux, then Mesa doesn't slow anything down in this case, as I mentioned before. You can be sure that screenshots using those binary drivers will look the same. It looks like you sometimes don't know what I'm talking about.
You might want to read up here:
I'd love to hear more about that and about Quartz, but you didn't say anything special.
Last edited by deanjo; 13 November 2008, 07:22 PM.
Originally posted by deanjo:
You started firing guns before you even thought about the comment, which in the end you eventually agreed with:
Take the bloody comment in context. Read before you type.
You'd best look at implementations of drivers on the same OS to draw that conclusion, say fglrx vs. radeonhd vs. radeon, and then take snapshots of the screens and do a differential comparison of them on the same frame.
OS X server is there to manage OS X networks. And it does that EXTREMELY well.
Originally posted by kraftman:
Simply, Enemy Territory runs faster for me under Linux than under XP. It can be due to drivers, but that's why I think Mesa has no big influence on the binary blobs (and NVIDIA does many things their own way).
So, we can say that Mac OS needs years to catch up with Linux. I should add, in the server area, but you didn't say before what you meant.
Many people don't. You can say what you want, but a firewall disabled by default is a little lame.
I'm sometimes quite lazy, but you said that Linux needs years to catch up, and that wasn't just a correction.
It's a pretty massive enhancement, not a complete rewrite, that's for sure, with more focus being put on OpenCL, Grand Central, and HD video acceleration (at least it wasn't a rewrite when I left Apple a couple of months back, and the HD acceleration can be found on the new MacBook line). Still, Linux will be playing catch up for a few years after 10.6 comes out.
which in the end you eventually agree with:
Btw. we can trade many arguments, but it usually leads to a flame war. I can agree that Quartz is probably a much more modern way than X to do some things, but as you said, every OS has its own advantages and disadvantages.
Last edited by deanjo; 13 November 2008, 06:02 PM.
Originally posted by deanjo:
xorg has been needed until just recently. Mesa DOES have a huge factor in performance as well. Why do you think the blobs bypass MUCH of X? The drivers that Ubuntu uses DO use Mesa for the Intel chipset. Mesa is not used on OS X.
Those are their names. I do the same when I spell darwin in lowercase too.
As would be expected of an OS that has many years on an OS that was not marketed for servers until recently.
Go right ahead. As far as the firewall goes, most people have an external firewall anyway, whether it's through a router or a modem. Glad you keep up with the times.
FYI, TheArgos brought up X; I commented and corrected him on a few things. So read the bloody thread before you start accusing me of trolling. It seems you cannot keep up with the conversation.
Not a fanboy at all, I use pretty much every OS equally on a daily basis, they all have their weaknesses and strengths.
Originally posted by kraftman:
Oh, so you're talking mainly about X-related things. Xorg is no longer needed in many system configurations (it seems that you're years behind...). Using the NVIDIA or AMD/ATI binary drivers, Mesa probably has no influence on performance. I wonder if they'll replace that crap they call a file system, or maybe they'll just give you a few more of those fine desktop effects?
It seems that you're a fanboy, because you write "linux" and "Mac OS". That's one of the symptoms.
I haven't seen too many Mac OS benchmarks, because people just benchmark more professional systems like Linux, FreeBSD and OpenSolaris.
As far as I know, Mac OS is just too lame: its crappy file system, security holes (firewall disabled by default, which is very meaningful), etc. I'd love to find some MySQL and DNS benchmarks...
You started talking about how X is bad, etc. This article is about something else, so stop trolling, please.
Not a fanboy at all, I use pretty much every OS equally on a daily basis, they all have their weaknesses and strengths.
Originally posted by deanjo:
The statement was made in general towards X, which is correct, but it also applies to the upcoming technologies that 10.6 is bringing, such as Grand Central, OpenCL, HD acceleration, etc. Things like multihead displays and 10,000 xorg configs are not needed in OS X. Plug and play. Even the video drivers are simply handled through OS updates. It's been like that from the start. Linux is just starting to try to enable such functionality and still has a ways to go. I'm sure the fact that the chipset had to use Mesa on Linux did not help performance in the games either.
You don't like Mac OS, that's fine. Too many people go into disbelief because of fanboyism and can't give credit where credit is due, and would rather come up with some other fanatical reason for the results.
Originally posted by kraftman:
You meant X, right? In many other things Mac OS looks just lame in comparison to Linux. In my opinion KDE 4 just kills the Mac OS desktop, and that's what the simple user sees. He doesn't know whether the system is using X or Quartz, and it's meaningless to him. Can you explain to me what's so cool about Quartz?
Maybe the drive in the benchmark does not support fsync and Ubuntu tries to do it another way, by emulation etc., and that's why it's slow (but who knows if that was the slow part?). Don't take it really seriously, it's just my simple thinking :>
Btw. I don't like Macs, but I write Mac OS. When I was younger I wrote mac os, windows and Linux, but it was funny.