Ya.
Without more information it's hard to tell exactly what is going on with these benchmarks.
It's pretty plain that the Linux OSS drivers are getting trounced by OS X. This is not surprising given the immaturity of the platform. (And X.org is an old platform; its development has just been very slow, I guess. Let's hope it's a late bloomer.)
The way I see it, for a long time the developers were just trying to get the thing to run stably. After all, we have no fewer than three graphics drivers operating on a single piece of hardware at any one time: your VGA or framebuffer console drivers, your 2D DDX drivers, and then your DRI/DRM drivers.
That's no fewer than three different projects with different approaches and ideologies working together: Linux kernel developers (DRM/VGA/framebuffer), X.org drivers twiddling bits around on the PCI bus (the DDX drivers), and then the DRI drivers from the Mesa and X.org folks.
Just getting it to run was a challenge.
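Just to make that layering concrete, here's a minimal sketch (my own illustration, not something from any article) of a userspace program talking to the bottom layer of that stack, the kernel's DRM driver, through libdrm. The device path and build line are assumptions about a typical Linux setup:

```c
/* Query which kernel DRM driver owns the card.
 * Build (assuming libdrm is installed):
 *   cc drm_probe.c -o drm_probe $(pkg-config --cflags --libs libdrm)
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* The DRM layer exposes the GPU as a device node. */
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* libdrm wrapper around the DRM_IOCTL_VERSION ioctl. */
    drmVersionPtr v = drmGetVersion(fd);
    if (v) {
        printf("kernel DRM driver: %s %d.%d.%d (%s)\n",
               v->name, v->version_major, v->version_minor,
               v->version_patchlevel, v->desc);
        drmFreeVersion(v);
    }

    close(fd);
    return 0;
}
```

Run it and it prints which kernel driver (radeon, i915, and so on) currently owns the card; that's the same piece of hardware the console driver, the DDX, and the DRI driver all have to share.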
Hopefully, with the modernization of the driver model and improvements to the X server, we can start to see the developers concentrating more and more on performance.
Especially once the work gets the attention of the Linux kernel developers, who are all about performance, things should start to shape up.
For example:
The article may be locked for now unless you're a subscriber, but basically it's about taking a memory subsystem designed to let high memory (above 1GB) be used efficiently on 32-bit systems and applying it to graphics memory management.
It led to an 18x improvement in Quake 3 performance, and glxgears went from 85 FPS to 360 FPS.
Of course, that was with the development memory-managed driver and not the ones anybody is using now (the ones in production are better optimized to begin with).
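To give a feel for what "the kernel managing graphics memory" buys you, here's a hedged sketch using the generic DRM "dumb buffer" ioctls that modern kernels expose. The ioctls are real, but this is just my illustration of kernel-managed graphics buffers; it is not the code from that article, and these particular ioctls arrived later than the memory-manager work described there:

```c
/* Ask the kernel's graphics memory manager for a buffer, map it, free it.
 * Build: cc dumb_buffer.c -o dumb_buffer   (needs the Linux kernel headers)
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/dri/card0");
        return 1;
    }

    /* The kernel decides where the buffer lives (VRAM, GART, system RAM). */
    struct drm_mode_create_dumb create = {
        .width = 640, .height = 480, .bpp = 32,
    };
    if (ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0) {
        perror("DRM_IOCTL_MODE_CREATE_DUMB");
        close(fd);
        return 1;
    }
    printf("kernel allocated %llu bytes (pitch %u, handle %u)\n",
           (unsigned long long)create.size, create.pitch, create.handle);

    /* Get an mmap offset so userspace can write pixels directly. */
    struct drm_mode_map_dumb map = { .handle = create.handle };
    if (ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) == 0) {
        void *pixels = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, (off_t)map.offset);
        if (pixels != MAP_FAILED) {
            /* ... draw into 'pixels' here ... */
            munmap(pixels, create.size);
        }
    }

    /* Hand the memory back to the kernel's manager. */
    struct drm_mode_destroy_dumb destroy = { .handle = create.handle };
    ioctl(fd, DRM_IOCTL_MODE_DESTROY_DUMB, &destroy);
    close(fd);
    return 0;
}
```

The point is that userspace never decides where the memory physically lives; the kernel manager does, which is exactly the kind of bookkeeping that used to be scattered across the DDX and DRI drivers.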
-----------------------
As for the rest of the benchmarks in this article, it's very difficult to draw a solid conclusion from them.
In all of these tests they are using the same code compiled with the same compiler on both OS X and Ubuntu. So while the results are interesting, it would be more interesting to investigate and determine exactly why the benchmarks get the results they get.
That's why it's important that readers be given the settings and details, so that they can see for themselves and accurately recreate what is being shown.
I suppose I am missing where this is laid out, so if anybody can help me I would be very grateful.
For example, it could be a performance bug in Ubuntu. Maybe a compiler misconfiguration or a kernel bug is limiting performance. Maybe it's something easy to fix that could lead to a vast improvement.
Or it could be that the benchmarks are not being used properly, and readers could point out improvements or changes that would help Phoronix improve how it reports things and the benchmark suites it uses.
I dunno.