Ubuntu Plans For Linux x32 ABI Support


  • Ubuntu Plans For Linux x32 ABI Support

    Phoronix: Ubuntu Plans For Linux x32 ABI Support

    With the x32 ABI for Linux finally coming together, Ubuntu developers are making plans to support this interesting ABI in the future...

    http://www.phoronix.com/vr.php?view=MTEwMTk

  • #2
    They should start working out the kinks with this arch ASAP so that they can provide it on x86_64 devices with 2 gigs of RAM and under. I don't want to know how many headers and how much non-portable code assume that 32-bit pointers == x86 and 64-bit pointers == x86_64.
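
    A rough sketch of that pitfall (assuming GCC/Clang's predefined macros, nothing Ubuntu-specific): on x32 the instruction set is x86_64 but pointers are 32-bit, so any check that equates "32-bit pointers" with "x86" picks the wrong branch. GCC defines both __x86_64__ and __ILP32__ for x32:

    /* build with gcc -m64, -m32 or -mx32 (the last needs an x32-capable toolchain) */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__x86_64__) && defined(__ILP32__)
        puts("x32: x86_64 instruction set, 32-bit pointers");
    #elif defined(__x86_64__)
        puts("x86_64: 64-bit instruction set, 64-bit pointers");
    #elif defined(__i386__)
        puts("x86: 32-bit instruction set, 32-bit pointers");
    #endif
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        return 0;
    }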

    Here's how I envision it working:

    1. Computers with 4GB of RAM or more use the native 64-bit arch, because then you don't have to use PAE, which slows you down and defeats the purpose of using x32 (plus, with >= 4GB of RAM, the memory savings of x32 aren't going to be significant). These boxes would only have the native 64-bit environment and the old x86 compatibility environment installed, with no x32 support, because if you try to support x86_64 AND x32 AND x86, you end up with so many different versions of libs loaded into memory that you defeat the purpose of trying to save memory by using x32.

    2. Computers with (much) less than 4GB of RAM but with an x86_64 processor use x32 for the kernel and all the distro packages as much as possible. For third-party apps you download from the web you would grab the x86 (32-bit) version and have an x86 compatibility environment installed, much as x86_64 distros do today. If an x32 web browser isn't compatible with an x86 plugin, you'd have to install an x86 browser. That sucks though, because then you have x32 libs loaded for your desktop and x86 libs loaded for your browser, and goodbye memory savings... hmm... will have to look into whether x86 Flash can work in an x32 browser without loading the entire x86 world into memory...

    3. Computers that don't have an x86_64 capable processor would be stuck with the old x86 instruction set as before.

    Comment


    • #3
      Originally posted by allquixotic View Post
      2. Computers with (much) less than 4GB of RAM but with an x86_64 processor use x32 for the kernel and all the distro packages as much as possible. For third-party apps you download from the web you would grab the x86 (32-bit) version and have an x86 compatibility environment installed, much as x86_64 distros do today. If an x32 web browser isn't compatible with an x86 plugin, you'd have to install an x86 browser. That sucks though, because then you have x32 libs loaded for your desktop and x86 libs loaded for your browser, and goodbye memory savings... hmm... will have to look into whether x86 Flash can work in an x32 browser without loading the entire x86 world into memory...
      x32 for the kernel does not exist; this is just for userspace. You need an x86_64 kernel for it.
      There will not be much need for x86_64 programs (except for e.g. big databases), since most applications are fine with < 4 GB of RAM. It may be useful for mmap()ing large files, though.
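
      A minimal sketch of that last point (hypothetical file name, just an illustration): an mmap() of a really big file has to fit in the process's virtual address space, which is roughly 4 GB with 32-bit pointers (x86 or x32) but effectively unlimited on full x86_64.

      /* map a whole (possibly huge) file read-only */
      #define _FILE_OFFSET_BITS 64              /* 64-bit off_t even on 32-bit targets */
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("huge.dat", O_RDONLY);  /* hypothetical multi-GB file */
          if (fd < 0) { perror("open"); return 1; }

          struct stat st;
          if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

          /* With 32-bit pointers the whole mapping must fit below ~4 GB (a file
           * bigger than that would have to be mapped in windows), so very large
           * files are where a full x86_64 build still wins. */
          void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }

          printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);
          munmap(p, (size_t)st.st_size);
          close(fd);
          return 0;
      }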

      Comment


      • #4
        Originally posted by Koorac View Post
        x32 for the kernel does not exist; this is just for userspace. You need an x86_64 kernel for it.
        There will not be much need for x86_64 programs (except for e.g. big databases), since most applications are fine with < 4 GB of RAM. It may be useful for mmap()ing large files, though.
        Depends on what you call "most"... Kdenlive (melt) can eat all your RAM for breakfast. I'd assume it's similar with Blender and GIMP as well, and these are fairly common programs.

        Comment


        • #5
          great news =)

          This is actually really great news =)

          Comment


          • #6
            See, in a day when all 64-bit computers have plenty of memory, why do we want this? Seriously, in most programs created for 64-bit, the 64-bit version smokes the x86 and x32 versions.

            Comment


            • #7
              Originally posted by LinuxID10T View Post
              See, in a day when all 64-bit computers have plenty of memory, why do we want this? Seriously, in most programs created for 64-bit, the 64-bit version smokes the x86 and x32 versions.

              It can allow more code to fit into the L2 cache.

              Comment


              • #8
                It's interesting to see Ubuntu of all distros potentially being on the 'forefront' of implementing x32. I wonder if Arch Linux (my distro of choice) will officially support x32 sometime in the future.

                Originally posted by LinuxID10T View Post
                See, in a day when all 64-bit computers have plenty of memory, why do we want this? Seriously, in most programs created for 64-bit, the 64-bit version smokes the x86 and x32 versions.
                Yes, but x32 binaries have apparently shown in benchmarks that they can 'smoke' the x64 versions. x32 binaries will also have a smaller footprint / use less RAM, even less than 32-bit code I'd wager, given that the extra registers in x32 (twice as many) mean much less code to push/pop data to and from the stack compared to 32-bit. In short, if you do not need a program to address more than 4GB, then x32 is nothing but an improvement. Of course there's nothing preventing you from using both x32 and x64 programs on the same system, although you will then need both the x32 and x64 sets of libraries. One option would perhaps be to run everything as x32 and then have any applications that need more than 4GB be statically compiled with the required x64 libraries?

                I have a 4GB system and an 8GB system, and I use GIMP, Blender and Inkscape quite a lot on both, and I haven't personally had any memory shortage problems on the 4GB system. However, when it comes to Blender in particular, 4GB could quickly become an unacceptable limit for large projects.

                edit: also, what is the 'larger register file' that Michael mentioned in the article?
                Last edited by XorEaxEax; 05-13-2012, 03:22 AM.

                Comment


                • #9
                  Originally posted by XorEaxEax View Post
                  It's interesting to see Ubuntu of all distros potentially being on the 'forefront' of implementing x32. I wonder if Arch Linux (my distro of choice) will officially support x32 sometime in the future.


                  Yes, but x32 binaries have apparently shown in benchmarks that they can 'smoke' the x64 versions. x32 binaries will also have a smaller footprint / use less RAM, even less than 32-bit code I'd wager, given that the extra registers in x32 (twice as many) mean much less code to push/pop data to and from the stack compared to 32-bit. In short, if you do not need a program to address more than 4GB, then x32 is nothing but an improvement. Of course there's nothing preventing you from using both x32 and x64 programs on the same system, although you will then need both the x32 and x64 sets of libraries. One option would perhaps be to run everything as x32 and then have any applications that need more than 4GB be statically compiled with the required x64 libraries?

                  I have a 4GB system and an 8GB system, and I use GIMP, Blender and Inkscape quite a lot on both, and I haven't personally had any memory shortage problems on the 4GB system. However, when it comes to Blender in particular, 4GB could quickly become an unacceptable limit for large projects.

                  edit: also, what is the 'larger register file' that Michael mentioned in the article?
                  A lot of it has to do with how much a program uses 64-bit variables. In photo/video applications and many calculations, that is a lot. Therefore 64-bit does well on multimedia and scientific benchmarks, yet often does nothing on others.

                  Comment


                  • #10
                    Originally posted by LinuxID10T View Post
                    A lot of it has to do with how much a program uses 64-bit variables. In photo/video applications and many calculations, that is a lot. Therefore 64-bit does well on multimedia and scientific benchmarks, yet often does nothing on others.
                    Well, not exactly. The registers being 64-bit instead of 32-bit does help a lot when you are dealing with 64-bit data, of course, but even code which doesn't manipulate 64-bit data benefits greatly from 64-bit versus 32-bit, because not only are the 64-bit registers twice as big, there are also twice as many of them.

                    And given that CPU registers are where all the data manipulation takes place, having more of them has a great impact on performance, particularly on a register-starved architecture like x86.

                    x32 offers all the registers of x64 while not suffering from the cache-eating size of 64-bit pointers, which makes the code smaller and thus potentially quite a bit faster than x64.
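
                    A quick way to see the register effect for yourself (just a sketch, not from the article): compile a function with several live values for i386, x32 and x86_64 and compare the assembly. The 8-register i386 build typically has to spill to the stack, while the 16-register x32/x86_64 builds can usually keep everything in registers.

                    /* compare: gcc -O2 -S -m32 dot.c ; gcc -O2 -S -mx32 dot.c ; gcc -O2 -S -m64 dot.c */
                    int dot4(const int *a, const int *b, int n)
                    {
                        int s0 = 0, s1 = 0, s2 = 0, s3 = 0;   /* several values live at once */
                        for (int i = 0; i + 3 < n; i += 4) {
                            s0 += a[i]     * b[i];
                            s1 += a[i + 1] * b[i + 1];
                            s2 += a[i + 2] * b[i + 2];
                            s3 += a[i + 3] * b[i + 3];
                        }
                        return s0 + s1 + s2 + s3;
                    }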

                    Comment


                    • #11
                      Originally posted by xir_ View Post
                      It can allow more code to fit into the L2 cache.
                      I am just curious: do -O3 optimizations make binaries "eat" more L2 cache than -O2 optimizations? Assume everything else is kept the same.

                      Comment


                      • #12
                        Originally posted by Hirager View Post
                        I am just curious: do -O3 optimizations make binaries "eat" more L2 cache than -O2 optimizations? Assume everything else is kept the same.
                        IIRC Firefox is by default compiled with -Os because the smaller cache footprint outweighs all the other optimizations. But that's something you'll have to test for each project separately.


                        The linked Ubuntu docs seem to be hidden behind a login. Is there a solution for the library redundancy? Having to load x32 kdelibs+Qt AND x86_64 kdelibs+Qt for that one KDE app that benefits from >4GB of memory would probably outweigh any memory savings to be had.

                        Comment


                        • #13
                          Originally posted by rohcQaH View Post
                          IIRC Firefox is by default compiled with -Os because the smaller cache footprint outweighs all the other optimizations. But that's something you'll have to test for each project separately.


                          The linked Ubuntu docs seem to be hidden behind a login. Is there a solution for the library redundancy? Having to load x32 kdelibs+Qt AND x86_64 kdelibs+Qt for that one KDE app that benefits from >4GB of memory would probably outweigh any memory savings to be had.
                          No offence meant, but I would rather hear the answer from someone who specializes in this sort of thing.

                          As to your question: you forget just how big multimedia projects can be. It is not about memory savings for big programs; it is about the savings achieved in workflows which do not require 64-bit software. 64-bit programs are treated here as an addition and nothing more. So this is a back-to-the-past situation, because it turned out that the drawbacks of 64-bit software can be nullified.

                          Comment


                          • #14
                            Will there be a benefit for WINE?

                            Comment


                            • #15
                              Originally posted by Hirager View Post
                              I am just curious: do -O3 optimizations make binaries "eat" more L2 cache than -O2 optimizations? Assume everything else is kept the same.
                              Well, since -O3 favours speed over code size, it is likely to produce bigger binaries than -O2 and thus fill up the CPU cache faster. However, since the optimizer aims for the fastest speed, it should only make the code larger when it estimates that the added cache footprint (through inlining etc.) will not make performance worse.

                              In reality, though, the heuristics governing this are very difficult to get right, which is why the same code compiled with -O2 will sometimes beat -O3. I've never encountered this with PGO (profile-guided optimization), though, which suggests that the runtime data it uses when making optimization choices allows it to accurately weigh the impact that code size/cache misses will have on performance.
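
                              For anyone who wants to check the size side of this on their own code, a simple sketch (assuming GCC and the binutils "size" tool) is to build the same file at -O2 and -O3 and compare the .text sections; -O3's extra inlining, unrolling and vectorization usually shows up there.

                              /* saxpy.c: a loop that -O3 will typically unroll/vectorize, growing the code.
                               * Compare the generated code size with e.g.
                               *   gcc -O2 -c saxpy.c -o o2.o && size o2.o
                               *   gcc -O3 -c saxpy.c -o o3.o && size o3.o
                               * Whether the bigger version is faster depends on how much of it stays hot in cache. */
                              #include <stddef.h>

                              void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
                              {
                                  for (size_t i = 0; i < n; i++)
                                      y[i] += a * x[i];
                              }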

                              Comment
