TitaniumGL: A Faster Multi-Platform Graphics Driver Architecture?
-
Originally posted by Ansla: I tried activating kwin compositing on my Eeepc 901 before it broke down completely; it turned the entire desktop into a slide show. Unless the "standard" improved significantly over the Eeepc 901, Intel notebooks just can't handle compositing.
1. The memory in the netbook is slower/cheaper than laptop memory. It's DDR2, whereas you can expect DDR3 on a real laptop.
2. The Atom processor could be a bottleneck, since the CPU is heavily involved in graphics processing for such a small device. Atom processors are pathetic -- they don't hold a candle to something as "revolutionary" as a first-gen Core 2 from ~2007.
3. The mobo used is going to have a slower FSB and less bandwidth for transferring data between the GPU and the CPU and RAM.
I have a ThinkPad X61 which uses an Intel integrated GPU which is one generation newer than yours -- instead of 945, it's 965. By today's standards, it's still quite old. The CPU is a low-power Core 2 Duo. Even with these fairly pedestrian specs, I get extremely smooth performance with any compositor I can throw at it on Windows or Linux: Aero, Mutter, Cinnamon, Kwin, whatever. To put it simply, this is a 3 or 4 year old integrated graphics chip that's now 2 micro-architecture generations behind (it'll be 3 generations behind when Ivy Bridge hits later this year) and compositing is simply perfect.
Granted, any more advanced 3D on this hardware is going to get badly bogged down. But hardware-accelerated compositing for workloads like web browsing and office applications works GREAT.
So saying "Intel's hardware makes compositing a slideshow" is a gross overstatement. I can only imagine how fast it is with Sandy Bridge graphics, which you can buy today, or Ivy Bridge graphics, which you'll be able to buy later this year.
-
Originally posted by Kano: Well maybe take a look at the OpenGL version string of a standard Intel netbook...
-
Well maybe take a look at the OpenGL version string of a standard Intel netbook...
-
Originally posted by smitty3268: I don't think that's accurate. Have a link?
What I understood was that he wants the code kwin uses to run in the core profile without having to use any of the functionality that was deprecated in 3.x, so that it's simple to use the same code in GL2, GL3, and GL ES contexts. I don't think I ever heard of a plan to get rid of GL2 support, though.
Kwin still has OpenGL 1.5 support, for crying out loud, and he's only NOW talking about killing that off. OpenGL 1 is ancient, and it's pretty hard to justify continuing to support it when almost nobody has hardware that old and the only other reason to keep it is working around AMD's utterly craptastic proprietary driver.
I'm not sure the proprietary "TitaniumGL" malware (it opens sites full of ads every time something invokes it), which isn't even properly compliant with OpenGL 1, is worth paying any consideration to.
-
Originally posted by allquixotic: Some people, e.g. the kwin maintainer, even want to require GL 3.x support, though I imagine that's still a little ways off from being implemented (in the sense that the GL 1.x AND 2.x renderers would be removed).
What I understood was that he wants the code kwin uses to run in the core profile without having to use any of the functionality that was deprecated in 3.x, so that it's simple to use the same code in GL2, GL3, and GL ES contexts. I don't think I ever heard of a plan to get rid of GL2 support, though.
-
Originally posted by vertexSymphony: When I referred to complexity I was talking about computational complexity... having fixed functions with only a couple of tunable parameters is much easier to optimize and take shortcuts with than having more stages in the pipeline, and those not being "fixed" but programmable (especially 3.x, which is designed around shaders).
Of course it's going to be fast with OpenGL 1.x; when he implements 2.x or, better yet, 3.x, then I'll be amazed if he gets similar performance to this or to llvmpipe.
But yes, this does have a niche use case; there's just too much hype for what it actually is.
Regards.
The other aspect that reduces the utility of any potential optimizations you'd find in TitaniumGL is that they won't carry over to the hardware-accelerated 3D paths on modern cards. Modern cards are fully programmable and likely don't even have fixed-function code paths, so the optimization techniques needed to achieve respectable GL 2.x performance on, say, Nvidia Fermi (GTX 400 series) or ATI Evergreen (HD 5000) or later are completely different. And since GL 3.x / 4.x is much newer, how to make those APIs fast is pretty much a trade secret within the halls of ATI and Nvidia.
What I'm saying is that, in the best case, TitaniumGL would be open sourced and we could draw some useful algorithms from it, for doing GL 1.4 on the CPU. Since these algorithms likely wouldn't be compatible with LLVM, they would only be able to affect the performance of the softpipe driver. And we'd have to somehow re-architect softpipe to use these faster algorithms for GL 1.4, while switching over to the slow old way for GL 2.x, because it's programmable, and TitaniumGL offers no advice on how to optimize a programmable pipeline.
This best case is not reality, though, because it's closed source and likely doesn't offer any truly novel algorithms.
Last edited by allquixotic; 11 March 2012, 05:41 PM.
-
Originally posted by Geri: vertexSymphony: well, actually, writing a programmable pipeline is much easier.
Of course it's going to be fast with OpenGL 1.x; when he implements 2.x or, better yet, 3.x, then I'll be amazed if he gets similar performance to this or to llvmpipe.
But yes, this does have a niche use case; there's just too much hype for what it actually is.
Regards.