Clear Linux Exploring "libSuperX11" As Newest Optimization Effort

  • #21
    Originally posted by techzilla View Post
    I wouldn't oppose it, because linked libraries are not really modular anyway. They're not like Unix utilities, and they are not at all decoupled, so it really is just a change to how the code is delivered. The only concern would be if consolidation cut flexibility for the distribution; if they plan to give the lib a solid build and configuration system, I'd think it would be a positive move.
    That argument is simply invalid.

    • #22
      Originally posted by cj.wijtmans View Post
      What an awful idea.
      .. because?

      10 small shared libraries that you always link to versus 1 slightly larger shared library... the small case is better because?
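
      To make that trade-off concrete, here is a minimal sketch of a trivial Xlib client; the file names are made up, and the library list in the comments is just the usual client-side X11 split, not anything specific to the libSuperX11 work itself.

      // Even this minimal program links against libX11.so, and a realistic
      // toolkit app typically adds libXext, libXrender, libXrandr, libXi,
      // libXcursor, ... as separate shared objects the dynamic linker must
      // locate, map and relocate at startup.
      // Build (assuming the X11 development headers are installed):
      //   g++ -O2 tiny_x11.cpp -lX11 -o tiny_x11
      //   ldd ./tiny_x11   # lists each libX*.so dependency individually
      #include <X11/Xlib.h>
      #include <cstdio>

      int main() {
          Display* dpy = XOpenDisplay(nullptr);   // resolved inside libX11.so
          if (!dpy) {
              std::fprintf(stderr, "cannot open display\n");
              return 1;
          }
          Window win = XCreateSimpleWindow(
              dpy, DefaultRootWindow(dpy), 0, 0, 320, 240, 1,
              BlackPixel(dpy, DefaultScreen(dpy)),
              WhitePixel(dpy, DefaultScreen(dpy)));
          XMapWindow(dpy, win);                    // also libX11.so
          XFlush(dpy);
          XCloseDisplay(dpy);
          return 0;
      }

      Merging those objects into one library means one set of symbol lookups and relocations at startup, and lets the toolchain optimize across what used to be library boundaries.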

      • #23
        Originally posted by AdamOne View Post
        Good, let Intel spend more cash on developing Linux, and let the community reap the rewards.
        Lin-Lin, I say!
        Fixed that for you.

        • #24
          Originally posted by stingray454 View Post
          I'm all for optimizations and improvements, but this sounds like an awful lot of work for little to no benefit, especially considering that applications need to add support for it. I can't imagine the benefits being noticeable from this, but I might be missing something. If I'm wrong, I'll happily congratulate them, though.

          It would be really interesting if they did some benchmarks, comparing the same application on regular X11 vs this approach to give an idea of what to expect.
          Of course such optimizations require the whole system to be fine-tuned. Your average Joe Sixpack Ubuntu install loads all possible drivers and daemons, spending over a minute just loading and launching stuff. OTOH, a fine-tuned NUC system can get through the UEFI firmware in a few seconds, the kernel in less than 1.5-2 seconds, and boot to the desktop in less than a second, maybe even faster. In that case, cutting out all the bloat matters.

          Imagine a kiosk PC (maybe a service screen at an airport or something) booting in 2-5 seconds vs 1 to 10 minutes. Which one is better? Imagine your flight leaves in 10 minutes. Do you seriously have time to read some boring boot logs? Stock Ubuntu looks downright amateurish in such appliances.
          Last edited by caligula; 07 January 2019, 01:25 PM.

          • #25
            Originally posted by arjan_intel View Post

            .. because?

            10 small shared libraries that you always link to versus 1 slightly larger shared library... the small case is better because?
            No "because": if you are going to put effort into optimizations, you have to make a valid case for doing them, not the other way around. It's a fallacy to over-optimize everything, especially in this case, IMHO. I remember my days as an over-optimization freak. C++ had no move semantics back then, and I showed on forums how a custom string class was 100 times faster than an STL string. However, I was putting a lot of effort into coding an entire STL and a cross-platform UI library... because optimizations, eventually achieving absolutely nothing. Oh, and those optimizations could easily have been made by the compiler... And those days of ripping Windows XP apart, replacing NT programs with smaller ones and deleting every REGKEY I could... those were the days.

            • #26
              Originally posted by coder View Post
              If you're really performance-minded, then I'd imagine you'd do better with a more modern and streamlined library atop Vulkan than OpenGL.

              I'm waiting for the dust to settle in the realm of Vulkan wrappers. Once there are a handful of popular ones, I'll look at them more seriously. I doubt I'll miss OpenGL.
              I'd rather not have to rewrite every OpenGL application out there. I think a good embedded OpenGL implementation is a lot of the way there, without the trouble of porting. I mean, porting to Vulkan from OpenGL (in a dumb way) is not that much trouble, but it is some.

              • #27
                Originally posted by microcode View Post
                I'd rather not have to rewrite every OpenGL application out there. I think a good embedded OpenGL implementation is a lot of the way there, without the trouble of porting.
                Sure, I'm all for that. Just saying that I don't know how much more efficiency you can expect to wring out while still keeping OpenGL's API.

                • #28
                  Originally posted by cj.wijtmans View Post

                  No "because": if you are going to put effort into optimizations, you have to make a valid case for doing them, not the other way around. It's a fallacy to over-optimize everything, especially in this case, IMHO. I remember my days as an over-optimization freak. C++ had no move semantics back then, and I showed on forums how a custom string class was 100 times faster than an STL string. However, I was putting a lot of effort into coding an entire STL and a cross-platform UI library... because optimizations, eventually achieving absolutely nothing.
                  Maybe your implementations didn't achieve anything, but the C++ guys have acknowledged that such semantics are crucial, so they tuned the whole language to support exactly that. I can imagine problematic utilities like ESR's reposurgeon; it would really pay off to have an optimized language and libraries in those cases.
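
                  To illustrate for anyone who missed that era, here is a minimal sketch of what C++11 move semantics give you for free; the code is purely illustrative and not from any post above.

                  // Before C++11, returning a large object by value meant a deep
                  // copy, which is exactly what hand-rolled "fast" string classes
                  // worked around. Since C++11 the buffer is moved (stolen), not
                  // copied, and return-value elision often removes even the move.
                  #include <string>
                  #include <utility>
                  #include <vector>

                  std::string make_big_string() {
                      std::string s(1000000, 'x');  // one large allocation
                      return s;  // moved (or elided) out, not deep-copied
                  }

                  int main() {
                      std::string a = make_big_string();  // no copy of the megabyte
                      std::string b = std::move(a);       // steals a's buffer: O(1)
                      std::vector<std::string> v;
                      v.push_back(std::move(b));          // moves into the vector, no copy
                      return 0;
                  }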

                  • #29
                    Originally posted by arjan_intel View Post
                    .. because?

                    10 small shared libraries that you always link to versus 1 slightly larger shared library... the small case is better because?
                    Because most people on this forum have absolutely no idea how stuff works and still comment anyway.

                    If all of the libraries are needed in a typical configuration, then it makes no sense at all to split them up. So I hope you are successful with this little optimization project; we need people like you (and Intel) instead of naysayers who don't contribute shit to anything.

                    Thanks for all you do!

                    • #30
                      Originally posted by cj.wijtmans View Post

                      No "because": if you are going to put effort into optimizations, you have to make a valid case for doing them, not the other way around. It's a fallacy to over-optimize everything, especially in this case, IMHO. I remember my days as an over-optimization freak. C++ had no move semantics back then, and I showed on forums how a custom string class was 100 times faster than an STL string. However, I was putting a lot of effort into coding an entire STL and a cross-platform UI library... because optimizations, eventually achieving absolutely nothing. Oh, and those optimizations could easily have been made by the compiler... And those days of ripping Windows XP apart, replacing NT programs with smaller ones and deleting every REGKEY I could... those were the days.
                      But he just gave you a valid case right there (LTO, among others).
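
                      As a minimal sketch of what LTO buys once formerly separate code ends up in one link unit (file names made up for illustration):

                      // scale.cpp -- one translation unit
                      int scale(int x) { return x * 3; }

                      // main.cpp -- another translation unit
                      int scale(int x);   // defined in scale.cpp
                      int main() { return scale(14); }

                      // Compiled separately, scale() is an opaque call across object
                      // files. With link-time optimization the compiler sees both
                      // units at link time and can inline and constant-fold the call:
                      //   g++ -O2 -flto scale.cpp main.cpp -o demo
                      // The same reasoning applies to merging many small libX*
                      // libraries into one: calls that used to cross shared-library
                      // boundaries become visible to the optimizer.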
