Clear Linux Exploring "libSuperX11" As Newest Optimization Effort
-
Originally posted by stingray454 View Post
I'm all for optimizations and improvements, but this sounds like an awful lot of work for little to no benefit, especially considering that applications need to add support for it. I can't imagine the benefits being noticeable from this, but I might be missing something. If I'm wrong, I'll happily congratulate them, though.
It would be really interesting if they did some benchmarks, comparing the same application on regular X11 vs this approach to give an idea of what to expect.
Imagine a kiosk PC (say, a service screen at an airport) booting in 2-5 seconds versus 1 to 10 minutes. Which one is better? Imagine your flight leaves in 10 minutes. Do you seriously have time to read some boring boot logs? Stock Ubuntu looks more than amateurish in such appliances.
Last edited by caligula; 07 January 2019, 01:25 PM.
-
Originally posted by arjan_intel View Post
.. because?
10 small shared libraries that you always link to versus 1 slightly larger shared library... the small case is better because?
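The overhead arjan_intel is alluding to can be made concrete: every additional shared object a process links against pays fixed load-time costs (path lookup, mapping of segments, relocation, symbol resolution), and ten small libraries pay those costs ten times where one merged library pays them once. A minimal illustrative sketch, assuming a Unix-like system with libm installed (libm stands in here for any of the small libX* libraries; the numbers it prints are machine-dependent and not from the libSuperX11 work):

```python
import ctypes
import ctypes.util
import time

# Illustrative sketch: loading one shared library the same way the
# dynamic loader would (dlopen under the hood) and timing it. The
# fixed per-library cost measured here is what gets multiplied by
# the number of separate libraries an application links against.
name = ctypes.util.find_library("m") or "libm.so.6"

start = time.perf_counter()
lib = ctypes.CDLL(name)  # roughly what dlopen() does at program start
elapsed_us = (time.perf_counter() - start) * 1e6

print(f"loading {name} took about {elapsed_us:.0f} microseconds")
```

On a warm cache each individual load is small in absolute terms, which is exactly why any real judgment would need the kind of before/after benchmarks stingray454 asks for above.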
-
Originally posted by coder View Post
If you're really performance-minded, then I'd imagine you'd do better with a more modern and streamlined library atop Vulkan than OpenGL.
I'm waiting for the dust to settle in the realm of Vulkan wrappers. Once there are a handful of popular ones, I'll look at them more seriously. I doubt I'll miss OpenGL.
-
Originally posted by microcode View Post
I'd rather not have to rewrite every OpenGL application out there. I think a good embedded OpenGL implementation is a lot of the way there, without the trouble of porting.
-
Originally posted by cj.wijtmans View Post
No because. If you are going to put effort into making optimizations, you actually have to make a valid case for doing it, not the other way around. It's a fallacy to over-optimize everything, especially in this case IMHO. I remember my days as an over-optimization freak. C++ had no move semantics back then, and I showed on forums how a custom string class was 100 times faster than an STL string. However, I was putting a lot of effort into coding an entire STL and cross-platform UI library... because optimizations, eventually achieving absolutely nothing.
-
Originally posted by arjan_intel View Post
.. because?
10 small shared libraries that you always link to versus 1 slightly larger shared library... the small case is better because?
If all of the libraries are needed in a typical configuration, then it makes no sense at all to split them up. So I hope you are successful with this little optimization project; we need people like you (and Intel) instead of naysayers who don't contribute shit to anything.
Thanks for all you do!
-
Originally posted by cj.wijtmans View Post
No because. If you are going to put effort into making optimizations, you actually have to make a valid case for doing it, not the other way around. It's a fallacy to over-optimize everything, especially in this case IMHO. I remember my days as an over-optimization freak. C++ had no move semantics back then, and I showed on forums how a custom string class was 100 times faster than an STL string. However, I was putting a lot of effort into coding an entire STL and cross-platform UI library... because optimizations, eventually achieving absolutely nothing. Oh, and those optimizations could easily have been made by the compiler. And those days of ripping Windows XP apart, replacing NT programs with smaller ones and deleting every regkey I could... those were the days.