Ubuntu Plans For Linux x32 ABI Support


  • Hirager
    replied
    Originally posted by rohcQaH View Post
    IIRC Firefox is by default compiled with -Os because the smaller cache footprint outweighs all the other optimizations. But that's something you'll have to test for each project separately.


    The linked Ubuntu docs seem to be hidden behind a login. Is there a solution for the library redundancy? Having to load x32 kdelibs+Qt AND x86_64 kdelibs+Qt for that one KDE app that benefits from >4 GB of memory would probably outweigh any memory savings to be had.
    No offence meant, but I would rather hear the answer from someone who specializes in this sort of thing.

    As to your question: you forget just how big multimedia projects can be. It is not about memory savings for big programs; it is about savings achieved in workflows that do not require 64-bit software. 64-bit programs are treated here as an addition and nothing more. So this is a back-to-the-past situation, because it turns out that the drawbacks of 64-bit software can be nullified.



  • rohcQaH
    replied
    Originally posted by Hirager View Post
    I am just curious: do -O3 optimizations make binaries "eat" more L2 cache than -O2 optimizations, with everything else kept the same?
    IIRC Firefox is by default compiled with -Os because the smaller cache footprint outweighs all the other optimizations. But that's something you'll have to test for each project separately.


    The linked Ubuntu docs seem to be hidden behind a login. Is there a solution for the library redundancy? Having to load x32 kdelibs+Qt AND x86_64 kdelibs+Qt for that one KDE app that benefits from >4 GB of memory would probably outweigh any memory savings to be had.
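
    On the -Os point above: a quick, hedged way to check this for a given project is to build a small hot loop at -O2, -O3 and -Os and compare the resulting .text sizes with binutils' size tool. The file and function names below are purely illustrative, not taken from Firefox or anything in this thread.

    /* unroll_demo.c -- illustrative only: a loop that -O3 will typically
     * unroll/vectorize (bigger .text, more i-cache pressure) while -Os keeps it
     * compact. Compare the section sizes with:
     *   gcc -O2 -c unroll_demo.c && size unroll_demo.o
     *   gcc -O3 -c unroll_demo.c && size unroll_demo.o
     *   gcc -Os -c unroll_demo.c && size unroll_demo.o
     */
    #include <stddef.h>

    /* y[i] = a*x[i] + y[i]: a classic candidate for unrolling/vectorization */
    void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }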



  • Hirager
    replied
    Originally posted by xir_ View Post
    It can allow more code and data to fit into the L2 cache.
    I am just curious: do -O3 optimizations make binaries "eat" more L2 cache than -O2 optimizations, with everything else kept the same?



  • XorEaxEax
    replied
    Originally posted by LinuxID10T View Post
    A lot of it has to do with how much a program uses 64-bit variables. In photo/video applications and many kinds of calculation, that is a lot. Therefore 64-bit does well on multimedia and scientific benchmarks yet often does nothing on others.
    Well, not exactly. Having 64-bit instead of 32-bit registers does help a lot when you are dealing with 64-bit data, of course, but even code that doesn't manipulate 64-bit data benefits greatly from 64-bit over 32-bit, because not only are the 64-bit registers twice as big, there are also twice as many of them.

    And given that CPU registers are where all the data manipulation takes place, having more of them has a great impact on performance, particularly on a register-starved architecture like x86.

    x32 offers all the registers of x86_64 while not suffering from the cache-eating size of 64-bit pointers, which makes the code smaller and thus potentially quite a bit faster than x64.
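
    For the pointer-size part of that, a minimal sketch (assuming a toolchain and libc built with x32 support; the file name is made up):

    /* abi_sizes.c -- print the type widths that differ between the ABIs.
     *   gcc -m32  abi_sizes.c && ./a.out   # i386:   4-byte pointers, 8 GPRs
     *   gcc -mx32 abi_sizes.c && ./a.out   # x32:    4-byte pointers, 16 GPRs
     *   gcc -m64  abi_sizes.c && ./a.out   # x86_64: 8-byte pointers, 16 GPRs
     */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(size_t) = %zu\n", sizeof(size_t));
        return 0;
    }

    The register count itself isn't visible from C, but the shrink in pointer and size_t widths is what trims the cache footprint of pointer-heavy data structures.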



  • LinuxID10T
    replied
    Originally posted by XorEaxEax View Post
    It's interesting to see Ubuntu of all distros potentially being at the 'forefront' of implementing x32; I wonder if Arch Linux (my distro of choice) will officially support x32 sometime in the future.


    Yes, but the x32 binaries have apparently shown in benchmarks that they can 'smoke' the x64 versions. Also, x32 binaries will have a smaller footprint / use less RAM, even less than 32-bit code I'd wager, given that the extra registers (twice as many) in x32 mean much less code to push/pop data to and from the stack compared to 32-bit. In short, if you do not need a program to address more than 4 GB, then x32 is nothing but an improvement. Of course there's nothing preventing you from using both x32 and x64 programs on the same system, although you will then need both the x32 and x64 sets of libraries. One option would perhaps be to run everything as x32 and then have any application that needs more than 4 GB statically compiled with the required x64 libraries?

    I have a 4 GB system and an 8 GB system, and I use Gimp, Blender and Inkscape a lot on both; I haven't personally had any memory shortage problems on the 4 GB system. However, when it comes to Blender in particular, 4 GB could quickly become an unacceptable limit for large projects.

    edit: also, what is the 'larger register file' which Michael mentioned in the article?
    A lot of it has to do with how much a program uses 64-bit variables. In photo/video applications and many kinds of calculation, that is a lot. Therefore 64-bit does well on multimedia and scientific benchmarks yet often does nothing on others.
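
    To illustrate the 64-bit-variables point (a hedged sketch, not from the article): on plain 32-bit x86 each 64-bit addition has to be split across two registers (add + adc), while on x86_64 and x32 it is a single register operation.

    /* checksum64.c -- 64-bit accumulation; compare the generated assembly with
     *   gcc -O2 -m32 -S checksum64.c
     *   gcc -O2 -m64 -S checksum64.c
     */
    #include <stddef.h>
    #include <stdint.h>

    uint64_t checksum64(const uint64_t *data, size_t n)
    {
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += data[i];   /* one 64-bit add on x86_64/x32; add + adc on i386 */
        return sum;
    }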



  • XorEaxEax
    replied
    It's interesting to see Ubuntu of all distros potentially being at the 'forefront' of implementing x32; I wonder if Arch Linux (my distro of choice) will officially support x32 sometime in the future.

    Originally posted by LinuxID10T View Post
    See, in a day when all 64-bit computers have plenty of memory, why do we want this? Seriously, for most programs built for 64-bit, the 64-bit version smokes the x86 and x32 versions.
    Yes, but the x32 binaries have apparently shown in benchmarks that they can 'smoke' the x64 versions. Also, x32 binaries will have a smaller footprint / use less RAM, even less than 32-bit code I'd wager, given that the extra registers (twice as many) in x32 mean much less code to push/pop data to and from the stack compared to 32-bit. In short, if you do not need a program to address more than 4 GB, then x32 is nothing but an improvement. Of course there's nothing preventing you from using both x32 and x64 programs on the same system, although you will then need both the x32 and x64 sets of libraries. One option would perhaps be to run everything as x32 and then have any application that needs more than 4 GB statically compiled with the required x64 libraries?

    I have a 4 GB system and an 8 GB system, and I use Gimp, Blender and Inkscape a lot on both; I haven't personally had any memory shortage problems on the 4 GB system. However, when it comes to Blender in particular, 4 GB could quickly become an unacceptable limit for large projects.

    edit: also, what is the 'larger register file' which Michael mentioned in the article?
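
    On the 4 GB point, a small hedged sketch of where the ABIs actually diverge in practice (made-up file name; the -mx32 build assumes an x32-capable toolchain and libc):

    /* bigalloc.c -- the practical limit is address space, not RAM.
     *   gcc -mx32 bigalloc.c -o bigalloc-x32
     *   gcc -m64  bigalloc.c -o bigalloc-x64
     * The x64 build can normally satisfy the allocation (given enough RAM or
     * overcommit); the x32 build cannot, because 5 GB does not fit in a
     * 32-bit size_t.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long long want = 5ULL << 30;   /* 5 GB */

        if (want > SIZE_MAX) {   /* true on x32/i386, where SIZE_MAX is 2^32 - 1 */
            printf("5 GB does not even fit in size_t on this ABI\n");
            return 1;
        }

        void *p = malloc((size_t)want);
        printf("malloc(5 GB) %s\n", p ? "succeeded" : "failed");
        free(p);
        return 0;
    }
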
    Last edited by XorEaxEax; 13 May 2012, 03:22 AM.



  • xir_
    replied
    Originally posted by LinuxID10T View Post
    See, in a day when all 64-bit computers have plenty of memory, why do we want this? Seriously, for most programs built for 64-bit, the 64-bit version smokes the x86 and x32 versions.

    It can allow more code and data to fit into the L2 cache.



  • LinuxID10T
    replied
    See, in a day when all 64-bit computers have plenty of memory, why do we want this? Seriously, for most programs built for 64-bit, the 64-bit version smokes the x86 and x32 versions.



  • Linuxhippy
    replied
    great news =)

    This is actually really great news =)



  • GreatEmerald
    replied
    Originally posted by Koorac View Post
    x32 for the kernel does not exist; this is just for userspace. You need an x86_64 kernel for it.
    There will not be much need for x86_64 programs (except for, e.g., big databases), since most applications are fine with < 4 GB of RAM. It may be useful for mmap()ing large files, though.
    Depends on what you call "most"... Kdenlive (melt) can eat all your RAM for breakfast, and I'd assume it's something similar with Blender and GIMP. And these are fairly common programs.
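
    On the mmap() remark in the quote: mapping a whole file needs contiguous address space, so a file of several gigabytes can be mapped by an x86_64 build but not within the 4 GB address space of an x32 or i386 build. A minimal sketch (illustrative only):

    /* mapfile.c -- map an entire file read-only and report its size. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* On a 32-bit-pointer ABI a multi-gigabyte file cannot be mapped in one piece. */
        if ((unsigned long long)st.st_size > SIZE_MAX) {
            fprintf(stderr, "file larger than this ABI's address space\n");
            return 1;
        }

        void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);
        munmap(p, (size_t)st.st_size);
        close(fd);
        return 0;
    }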

