Linux 3.4 Kernel Has x32 ABI Support


  • Linux 3.4 Kernel Has x32 ABI Support

    Phoronix: Linux 3.4 Kernel Has x32 ABI Support

    The pull happened last week prior to the Linux 3.4-rc1 release, but one of the other interesting changes in the Linux 3.4 kernel that hasn't been talked about much is the x32 support...

    http://www.phoronix.com/vr.php?view=MTA4MzE

  • #2
    For memory-constrained systems, this could result in significantly better memory efficiency on x86_64 CPUs. In particular, I am interested in whether the x86 Android tablets that are planned (using Intel Atom processors) are going to take advantage of this.

    Basically, you get all the advantages of the x86_64 architecture with none of the disadvantages, the main disadvantage being that pointers take up twice as much memory. So for pointer-heavy code (and what code isn't pointer-heavy these days), all those 64-bit pointers vs. 32-bit pointers start to add up, especially if your total system memory is in the range of 512 MB to 2 GB, as is the case with mobile devices.

    You could ship x86_64 Atom processors, technically capable of executing the full 64-bit ISA, and use 32-bit pointers to save memory. This sounds like a win-win.
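
    To make the pointer overhead concrete, here's a minimal C sketch (my own illustration, assuming a GCC and glibc built with x32 support, so it can be compiled once normally and once with -mx32):

        #include <stdio.h>

        /* A typical pointer-heavy node: two pointers plus a small payload. */
        struct node {
            struct node *next;
            struct node *prev;
            int value;
        };

        int main(void) {
            /* x86_64 (gcc node.c):       24 bytes per node (8 + 8 + 4, padded).
               x32    (gcc -mx32 node.c): 12 bytes per node (4 + 4 + 4). */
            printf("sizeof(void *)      = %zu\n", sizeof(void *));
            printf("sizeof(struct node) = %zu\n", sizeof(struct node));
            return 0;
        }

    Half the memory per node for a linked structure, while still keeping the extra registers and instructions of the 64-bit ISA.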

    On the desktop, this will only introduce problems. We already have an x86/x86_64 split on multi-lib Debian and Fedora systems; we use naming conventions like /usr/lib for 32-bit and /usr/lib64 for 64-bit (Red Hat distros), or /usr/lib for 64-bit and /usr/lib32 for 32-bit (Debian distros). Ubuntu is trying to lead the charge with proper multi-lib support by having /usr/lib/OMG-I-AM-AN-AWFUL-AND-REALLY-LONG-ARCHITECTURE-NAME/ directories, which would work perfectly fine on a tri-arch x86, x32, x86_64 system. But that seems really complicated. Three versions of each library? Ugh. You blow all your RAM savings just having all those versions loaded in memory.
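
    For what it's worth, a tri-arch multiarch tree would presumably end up looking something like the following (the i386 and x86_64 triplets are the real Debian/Ubuntu ones; the x32 triplet name is my assumption, since nothing is settled yet):

        /usr/lib/i386-linux-gnu/           (x86)
        /usr/lib/x86_64-linux-gnu/         (x86_64)
        /usr/lib/x86_64-linux-gnux32/      (x32, assumed name)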

    An x32 kernel, in particular, seems like it'll be fraught with problems for the desktop. Presumably you won't be able to load x86_64 kernel modules into an x32 kernel, because of the pointer size difference; you also won't be able to load x86 kernel modules into an x32 kernel, because of the ISA differences. So if you need to depend on any code -- tainted or otherwise -- that isn't designed to compile (in the case of open source) or link (in the case of binaries) with the x32 ISA, you can't use it. You're stuck with the traditional x86 or x86_64.

    So while I see enormous benefit for this on "pure" x32 embedded systems and mobile devices, I think most people with >= 4GB RAM on a desktop or full-fat laptop are probably not going to care about all the extra compatibility work they'll have to do, just to save 50 - 100 megs (optimistically) in memory overhead from x86_64. Oh, and your industrial-sized workstation apps that really do use up > 4GB RAM are still going to run OOM on x32.

    • #3
      Article typo, it's "Ingo" Molnar, not "Igno".

      • #4
        Originally posted by allquixotic View Post
        So while I see enormous benefit for this on "pure" x32 embedded systems and mobile devices, I think most people with >= 4GB RAM on a desktop or full-fat laptop are probably not going to care about all the extra compatibility work they'll have to do, just to save 50 - 100 megs (optimistically) in memory overhead from x86_64. Oh, and your industrial-sized workstation apps that really do use up > 4GB RAM are still going to run OOM on x32.
        Well, there are also the reported speed improvements (~15%, IIRC) due to more code fitting in the CPU caches. That said, I think you're right that it probably won't make a huge impact on the Linux desktop; personally, though, I find it interesting and will play around with it.

        • #5
          Originally posted by allquixotic View Post
          An x32 kernel, in particular, seems like it'll be fraught with problems for the desktop. Presumably you won't be able to load x86_64 kernel modules into an x32 kernel, because of the pointer size difference; you also won't be able to load x86 kernel modules into an x32 kernel, because of the ISA differences. So if you need to depend on any code -- tainted or otherwise -- that isn't designed to compile (in the case of open source) or link (in the case of binaries) with the x32 ISA, you can't use it. You're stuck with the traditional x86 or x86_64.

          So while I see enormous benefit for this on "pure" x32 embedded systems and mobile devices, I think most people with >= 4GB RAM on a desktop or full-fat laptop are probably not going to care about all the extra compatibility work they'll have to do, just to save 50 - 100 megs (optimistically) in memory overhead from x86_64. Oh, and your industrial-sized workstation apps that really do use up > 4GB RAM are still going to run OOM on x32.
          x32 runs on x86_64 kernels; there is no new architecture on the kernel side.

          • #6
            @allquixotic x32 exists only in userspace. The kernel is a "normal" x86-64 binary.
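
            A quick way to convince yourself of this (my own sketch; it assumes a kernel built with the x32 option, CONFIG_X86_X32, and an x32-capable toolchain):

                #include <stdio.h>
                #include <sys/utsname.h>

                int main(void) {
                    struct utsname u;
                    /* Built with gcc -mx32, this still reports machine "x86_64",
                       because the ordinary 64-bit kernel is serving the syscalls;
                       only the process's pointers are 4 bytes wide. */
                    if (uname(&u) == 0)
                        printf("kernel machine: %s\n", u.machine);
                    printf("sizeof(void *): %zu\n", sizeof(void *));
                    return 0;
                }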

            • #7
              A new rpm/deb nightmare is incoming, but fortunately distros have already had some practice with x86/x86_64 multiarch support. Adding a third architecture is nothing fundamentally new.

              However, I'd treat this one differently and provide both x32 and x64 libraries in a _single_ package (it will cost some disk space, but seriously, a few GBs is nothing these days). This isn't possible with x86, because a separate x86 package is required there, one that is also part of the x86 distro itself (x64 distro builds reuse those packages to add x86 support). x32, on the other hand, maps 1:1 onto x64 hardware, so not even a new distro build is needed.

              If we use this approach, the presence of x32 shared libs is guaranteed for the whole system, and distributions could then choose to build some applications as x32, because in MOST cases there is absolutely no reason to prefer x64 over x32 (read: Firefox could grow to 5 gigs, but a toolbar applet will not).

              • #8
                I think most people could use a full x32 system because I don’t know of any app that requires more than 1 GB of RAM… No problem then.

                • #9
                  Originally posted by stqn View Post
                  [...] most people could [...] because I [...]
                  Solid reasoning.

                  • #10
                    Thanks for approving!

                    • #11
                      Originally posted by stqn View Post
                      Thanks for approving!
                      He was being sarcastic, dummy. And just to prove how ignorant you are, here's a tiny list of legitimate applications/app categories that can easily use more virtual address space than can be mapped with 32-bit pointers:
                      • All digital audio workstations (e.g. Ardour, Audacity)
                      • GIMP (very easy to do if editing digital camera images in their original resolution)
                      • LibreOffice/OpenOffice (big databases, huge documents, large database driver caches, etc.)
                      • Simulations, e.g. OpenSimulator (server-side), Second Life client (client-side)
                      • Application servers, e.g. JBoss, Glassfish, Tomcat
                      • A non-modular browser with a zillion tabs open (okay, less likely than some of the above)
                      • Video transcoding / capture software, or mostly anything that deals with video, esp. real-time
                      • The LZMA/LZMA2/PPMd compression algorithms, in compression mode, with maximum/ultra compression quality
                      • Most any scientific computing application (if it uses OpenCL and pegs your entire CPU, that's a good hint)

                      Out of this list, "normal users" would probably only ever use the first three, and maaaaybe video transcoding, but still... whether or not these apps actually run OOM is irrelevant. The point is, their performance could be drastically improved on a system with more than 4 GB of RAM if they could use all that address space. Many applications are cleverly coded to sacrifice CPU time, or even store temporary results to disk, in order to avoid using up the virtual address space. But while this clever coding keeps the application stable on a 32-bit system, it comes with a huge performance hit. For example, writing to disk is 20 times slower (on average; SSDs notwithstanding) than writing to RAM. If you could just use a little more RAM, your app could run 20x faster!
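
                      To put a number on that wall, here's a little C sketch of my own (not tied to any app above): under any 32-bit ABI, x32 included, a 5 GiB request isn't even representable, while on x86_64 it's routine:

                          #include <stdint.h>
                          #include <stdio.h>
                          #include <stdlib.h>

                          int main(void) {
                              unsigned long long want = 5ULL << 30;   /* 5 GiB */
                              /* Under an ILP32 ABI, SIZE_MAX is about 4 GiB, so this
                                 request can't even be expressed as a size_t, let alone
                                 satisfied. On x86_64 it is at least representable, and
                                 (given enough RAM and overcommit) usually succeeds. */
                              if (want > SIZE_MAX) {
                                  printf("can't even ask for %llu bytes: SIZE_MAX = %zu\n",
                                         want, (size_t)SIZE_MAX);
                                  return 1;
                              }
                              void *p = malloc((size_t)want);
                              printf(p ? "got 5 GiB of address space\n" : "malloc failed\n");
                              free(p);
                              return 0;
                          }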

                      I have a desktop with 16 GB of RAM, and it wasn't that expensive. I could now afford 32 GB of RAM, but I'd need to get a new motherboard to slot it. There are even laptops these days that can take between 8 and 24 GB of RAM, depending on how large a laptop you want. RAM is cheap and extremely useful; not being able to use it is a sad waste. Set your memory-hungry apps free!

                      The above reasons are why I think 32-bit just isn't going to have a lasting future on the general-purpose desktop, because desktops are where all the high-capacity RAM is being bought and (increasingly) used. That said, not even the most insane user of a smartphone or tablet is going to try to use that much memory, so 32-bit is fine for ARM devices, which at the extreme high end only have 2 GB of RAM today. ARM is at least a decade away (maybe more like 15 years) from even needing 64-bit for any applications on Android or similar. For x86_64, the need for 64-bit is real, and it's here today.

                      • #12
                        I was sure someone would come up with a useless list of applications using more than 1 GB of RAM... I was obviously talking about “normal” use of a computer, not professional or specialized work.

                        Video transcoding doesn’t take more than a gig here, definitely not more than 2. Nor does compiling “normal” programs, using Firefox with 50 tabs or Audacity. My parents use OpenOffice on a computer with 1 GB RAM and it doesn’t need 3 GB of swap, no. No sane person would use xz at the maximum compression setting; it’s already too slow at the default with little benefit over gzip.

                        Buying something and creating waste because it’s “cheap” is stupid, and saying that everyone can/should do the same is even more so.

                        • #13
                          Funny that you mention Firefox. Mozilla has repeatedly run into address-space limits when using profile-guided optimization during Firefox builds. As Web 2.0 applications become increasingly complex, my guess is that the browser will be the next application to require 64-bit (after image and video editors).

                          Regarding gzip vs. xz, the difference can be staggering.

                          • #14
                            Originally posted by stqn View Post
                            I was sure someone would come up with a useless list of applications using more than 1 GB of RAM... I was obviously talking about “normal” use of a computer, not professional or specialized work.

                            Video transcoding doesn’t take more than a gig here, definitely not more than 2. Nor does compiling “normal” programs, using Firefox with 50 tabs or Audacity. My parents use OpenOffice on a computer with 1 GB RAM and it doesn’t need 3 GB of swap, no. No sane person would use xz at the maximum compression setting; it’s already too slow at the default with little benefit over gzip.

                            Buying something and creating waste because it’s “cheap” is stupid and saying that everyone can/should do the same even more so.
                            Talking only about extremely basic use cases that your grandparents would have, because they're “normal”, is stupid, and saying that everyone can/should do the same is even more so.

                            • #15
                              Originally posted by stqn View Post
                              No sane person would use xz at the maximum compression setting; it’s already too slow at the default with little benefit over gzip.
                              Every time I compress something, I use 7z at maximum compression...

                              Creating waste because it's cheap is indeed stupid. Assuming everyone uses their computer the same way you do is stupid too.
