Torvalds Is Unconvinced By LTO'ing A Linux Kernel

  • #11
    Originally posted by Pseus View Post
    Flash space, not the runtime footprint, is usually the problem in routers (which usually carry more RAM than Flash).
    Exactly. They can have 4 MB of flash and 128-512 MB of RAM. Often the whole file system is stored compressed and decompressed into RAM while booting, with both the kernel and the root filesystem in flash. The space requirements can be very strict.



    • #12
      It looks like Linus was so mad that he brought kernel.org down.

      Your privileges are revoked!



      • #13
        Originally posted by Pseus View Post
        Flash space, not the runtime footprint, is usually the problem in routers (which usually carry more RAM than Flash).
        It fits now. More webgui?



        • #14
          Originally posted by Brane215 View Post
          I....
          I've toyed with it and have seen many weird errors, so I recompiled everything with the bog-standard "-march=native -O2".
          LTO is interesting as an idea, but with this gcc and binutils... no, thanks. I'll wait for the next round of gcc before I try again.
          Actually, -march=native is even more problematic than LTO (it depends on the platform, though): for example, it can generate SSE instructions, which make context switches slower because the extra SIMD state takes a while to save and restore. That is pretty bad, especially if you have realtime requirements.
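
          To make the SSE point concrete, here is a minimal sketch; the function is made up for illustration, but the -mno-* flags are the ones the x86 kernel build really passes to keep SIMD code out of the kernel:

            // Hypothetical example: with g++ -O2 -march=native, GCC is free to
            // auto-vectorize this loop using SSE/AVX registers. Userspace is
            // fine because the scheduler saves that state anyway, but the
            // kernel does not save FPU/SIMD state on every entry, so the x86
            // kernel build forbids such codegen with -mno-sse, -mno-mmx, etc.
            void scale_buffer(float *buf, float factor, int n) {
                for (int i = 0; i < n; ++i)
                    buf[i] *= factor;  // candidate for SSE/AVX vectorization
            }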

          LTO seems much safer, though any gain for desktop users is doubtful: distribution kernels are built to be super robust and have everything in modules, which is presumably the least effective use case for LTO (and yet the most widely used one).
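
          As a rough sketch of what LTO adds at link time (file names are made up; a gcc/binutils with working LTO support is assumed), and why a monolithic build benefits more than a modular one:

            // Build:
            //   g++ -O2 -flto -c lib.cpp main.cpp
            //   g++ -O2 -flto lib.o main.o -o app
            // With -flto the object files carry the compiler's intermediate
            // representation, so the link step can inline add_one() across
            // translation units. A loadable module is only linked at load
            // time, so that boundary stays opaque to LTO.

            // --- lib.cpp ---
            int add_one(int x) { return x + 1; }

            // --- main.cpp ---
            #include <cstdio>
            int add_one(int x);
            int main() {
                std::printf("%d\n", add_one(41));
                return 0;
            }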



          • #15
            Liska's thesis was linked in the thread. Soon LTO will allow the compiler to de-duplicate identical functions. They're also adding the same folding to the linker, since each can catch different cases.

            That shaves 5% off the Firefox binary size, because C++ generated a few thousand variants of an "increase reference count" function, each with a different pointer type, yet all compiled to the exact same instructions.
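
            A minimal sketch of that duplication (the struct names here are hypothetical; Firefox's real refcounting templates are more involved):

              // Both instantiations of addref<T> differ only in the pointer
              // type and compile to byte-identical machine code. Identical
              // code folding -- the compiler-side merging from the thesis, or
              // the gold linker's --icf=all -- can keep just one copy.
              struct Node   { int refcount; };
              struct Window { int refcount; };

              template <typename T>
              void addref(T *p) { ++p->refcount; }

              // Stand-ins for the "few thousand variants":
              template void addref<Node>(Node *);
              template void addref<Window>(Window *);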



            • #16
              Originally posted by curaga View Post
              Liska's thesis was linked in the thread. Soon LTO will allow the compiler to de-duplicate identical functions. They're also adding the same folding to the linker, since each can catch different cases.

              That shaves 5% off the Firefox binary size, because C++ generated a few thousand variants of an "increase reference count" function, each with a different pointer type, yet all compiled to the exact same instructions.
              I understand that 5% off is important with 4 MB of flash storage. However, in desktop apps it doesn't matter at all. Hard drives are now 1 TB (SSD) and 4 TB (3.5" HDD). You can also set up RAID 6 or ZFS, so you get tens of terabytes very cheaply. You shouldn't bother with binary sizes. In fact, there's plenty of room for more functionality in Firefox. Luckily they're working hard at implementing more new features with each release.



              • #17
                Smaller size benefits all, not just embedded.

                Originally posted by caligula View Post
                I understand that 5% off is important with 4 MB of flash storage. However, in desktop apps it doesn't matter at all. Hard drives are now 1 TB (SSD) and 4 TB (3.5" HDD). You can also set up RAID 6 or ZFS, so you get tens of terabytes very cheaply. You shouldn't bother with binary sizes. In fact, there's plenty of room for more functionality in Firefox. Luckily they're working hard at implementing more new features with each release.
                Saying that a 5% size reduction does not matter on the desktop is a misunderstanding of how computers work.
                While stability is a major concern, as Linus points out, smaller size generally benefits everything:
                loading times, memory contention, cache usage, etc.
                Just reducing size while keeping everything else the same will produce a speedup.
                Whether it is measurable against general code behavior is another question.



                • #18
                  Originally posted by milkylainen View Post
                  Saying that a 5% size reduction does not matter on the desktop is a misunderstanding of how computers work.
                  While stability is a major concern, as Linus points out, smaller size generally benefits everything:
                  loading times, memory contention, cache usage, etc.
                  Just reducing size while keeping everything else the same will produce a speedup.
                  Whether it is measurable against general code behavior is another question.
                  I'm just saying that you're wasting storage resources with binaries that are too small. The manufacturers can't sell new equipment if the software doesn't grow "naturally", that is, according to Moore's law. It is important that you spend twice as much disk space and other resources every 18 months while doing the same thing. For example, in browsers they already switched to slower JavaScript because NaCl, .NET, and the OpenJDK/Sun JVM started to become too fast.



                  • #19
                    It's not lunchtime

                    Originally posted by caligula View Post
                    For example, in browsers they already switched to slower JavaScript because NaCl, .NET, and the OpenJDK/Sun JVM started to become too fast.
                    Don't feed the trolls.



                    • #20
                      Dear Torvalds, don't make us angry.
