Linux x32 ABI Interest Faded In 2013


  • #11
    Originally posted by energyman View Post
x32 was 'invented' by Intel because Atom sucks with 64-bit pointers. That is all. They built a suckfest of a CPU, and to save the day they splintered the x86 market even more.

    Nice move intel, you idiots.
    Nice display of ignorance.
If you're unable to explain technically why it's not worthwhile, then please keep your opinions to yourself.
I have absolutely no financial ties with Intel, and to some extent I'm not even sympathetic to Intel (I'd much rather have ARM become a strong competitive force that makes Intel a lot more humble, and hence competitive, than have the utter dominance it enjoys today).
However, I do have 30 years of experience with computers and in-depth expertise in all performance aspects of computing (networking, disk I/O, and CPU/RAM/bus).
For some people a 10% performance gain is worth the trouble (possible with x32); for other people even a 30% performance gain (impossible with x32) might not be worth the trouble.
Realize that one of the strongest proponents of x32 is Google, which has tens of thousands of servers (I believe none with Intel Atom processors), and still they're pushing this pretty hard.
Are you going to say you're smarter than the guys at Google? I know I'm not. And from your words it doesn't look like you know 1% of what I know about performance.
    Get a grip.



    • #12
      Originally posted by jokeyrhyme View Post
      Does anyone know if the strategy for x32 (use 64-bit features within 32-bit limits) has any value outside of x86? I'm specifically curious about the recently released 64-bit ARM ISA and its emulation of the original 32-bit ISA.

      The primary advantage for x32 (unless I'm mistaken) was lower RAM usage. This would seem to be especially important for the embedded devices that typify ARM usage. And even more important for garbage-collected environments like Android.
There's a defined 32-bit ABI for AArch64, and it's supported by binutils, GCC, and the Linux kernel (at least, patches have been published).

IIRC several AMD SPEC2000 submissions forced 32-bit codegen for some of the tests to reduce data-cache footprint and hence get better results. So x32 should help even more, I guess.



      • #13
so with typical cache hit rates over 95%, how much of an improvement is actually there?

        and '30 years of experience' just says:
        I am old. The stuff I learned and still love is old.

        not a good basis for your arguments.

        btw:
https://blog.flameeyes.eu/2012/06/debunking-x32-myths



        • #14
          Originally posted by energyman View Post
so with typical cache hit rates over 95%, how much of an improvement is actually there?

          and '30 years of experience' just says:
          I am old. The stuff I learned and still love is old.

          not a good basis for your arguments.

          btw:
          https://blog.flameeyes.eu/2012/06/debunking-x32-myths
          In your link, at bottom of comments,
          macpacheco = Marcelo Pacheco


To me, x32 remains a dead case, because
- it is for closed systems, and
- it's an undead hack we already had.
Remember the 286 with its 16 MiB of RAM via the segment/offset hack? This is pretty similar.
The outcome? Do we use 8086 instructions within the 386 instruction set, within the amd64 instruction set, within ...? Same old story.
Minor performance gains salted with near-impossible bugs, legacy support complicating life, legacy constraining future designs, a red flag to newer technologies just because they would break the old legacy code (like ASLR outside of 4 GiB being impossible within x32), global time waste? Gaining 20% while wasting 2000% of human life. This already happened in the past, and the outcome was: it was stupid. Move to the next architecture and wipe the old cruft for good. It had its day.



          • #15
            Originally posted by brosis View Post
            In your link, at bottom of comments,
            macpacheco = Marcelo Pacheco


To me, x32 remains a dead case, because
- it is for closed systems, and
- it's an undead hack we already had.
Remember the 286 with its 16 MiB of RAM via the segment/offset hack? This is pretty similar.
The outcome? Do we use 8086 instructions within the 386 instruction set, within the amd64 instruction set, within ...? Same old story.
Minor performance gains salted with near-impossible bugs, legacy support complicating life, legacy constraining future designs, a red flag to newer technologies just because they would break the old legacy code (like ASLR outside of 4 GiB being impossible within x32), global time waste? Gaining 20% while wasting 2000% of human life. This already happened in the past, and the outcome was: it was stupid. Move to the next architecture and wipe the old cruft for good. It had its day.
x32 is a plain linear memory model; pure C/C++ code can in many cases be recompiled with no changes at all. And if changes are needed they are tiny (assuming the code is already compatible with both 32-bit x86 and x86_64; essentially it's the 32-bit #ifdef path with 64-bit timestamps, everything else is the same 32-bit code).
I'm very aware it's a difficult, maybe even lost cause versus just using x86_64 only. But it's a lost cause because people decided it's not worth it, not because it's difficult to do at all! It's 99.9% about people wanting to do it; it's not going to take days of work per package, it's more like minutes of work per package (assuming no __asm code blocks or .S files are involved).
But I don't agree with the "it's a hack" statement. There's no segmentation involved.

I remember running the Microsoft C compiler from floppy disks circa 1990. We had the small model (16-bit code and data pointers), the medium model (16-bit code pointers, segmented data pointers: code limited to 64 KB, data could use all memory), and the large model (segmented code and data pointers). That was a hack... It took a few minutes to compile a plain hello-world program. The medium model had data pointers a different size from code pointers, which was very ugly. And memory segments were a PITA.



            • #16
              Originally posted by kertoxol View Post
The right ABI for netbooks, but it arrived too late.
It is really the wrong idea for just about anything. Once the people associated with the idea realize that, the code supporting x32 will slowly fade away.



              • #17
                Originally posted by brad0 View Post
The netbooks where this mattered were/are using 32-bit processors, so you can't use said ABI on those systems. On systems with 64-bit processors this isn't an issue. It's work to create an ABI with no real-world use, and then to have to maintain the compiler/toolchain for something very few people will use.
This is exactly the case. There is simply no real-world benefit that justifies the effort required to support the concept.



                • #18
                  Why would you waste your time doing this? It helps no one and only complicates the distro.

                  Originally posted by s_j_newbury View Post
At least on the Gentoo front, a major roadblock is getting complete multi-ABI coverage, so that it's possible to emerge a system with x32, then use the LP64 ABI as appropriate for a given application, or of course x86 where only an x86 package is available, with all dependencies resolved automatically.

                  We were pretty much there with the previous multilib-portage effort, but since that wasn't adopted upstream there's been quite a lot of work modifying ebuilds to integrate support, which is what I've been working on for the last month or so. Since I now have Steam working without any emul-linux-x86 binaries, I'm going to start pushing my changes out to ebuild maintainers, so hopefully this will be landing soon. Once that's done I'm going to have another attempt at bringing up x32 as a fully supported ABI, I probably need to have another look at llvm/clang...



                  • #19
Well, if counting years in the field adds to credibility, I've been around this stuff since the late seventies. By the way, that should have zero impact on credibility.

                    Originally posted by macpacheco View Post
                    There's no magic about how x32 works exactly.
The only way x32 could be slower than regular 32-bit or x86_64 is in some system calls that require conversion of the pointers passed.
                    Honestly I don't care if it is faster or slower, it really isn't a point of value here. It is the idea of supporting yet another ABI for nothing of value.
Plus, this set of pure 32-bit, 64-bit-data/32-bit-pointer, and pure 64-bit ABIs isn't anything new.
When all is said and done, you must get some performance improvement, except for some very rare cases that use almost no pointers.
You make an assumption that pointers are a huge issue; they aren't. Sure, you can look for and find examples of software that is cluttered with pointers, but not all code is so encumbered.
You don't need benchmarks to show that on average there will be 5-10% performance gains.
Sure you do! Are we supposed to take your word for it?
                    I see way too many people criticize x32, but the way they word their messages, it looks like they don't really understand every (positive) impact x32 will have on performance.
                    You have done nothing yourself to indicate a justification for supporting yet another ABI.
They try to frame x32 as only useful for embedded systems where RAM might be limited, showing they don't really understand it (and don't care to understand it correctly).
People forget that the main issue isn't saving RAM, but improving cache hit rates by reducing the size of data structures, reducing per-function stack usage (also to improve cache hits), and reducing RAM bandwidth usage.
Supporting another ABI doesn't help one bit, as now you have libs competing for that RAM and cache space. It really doesn't matter what the platform is; supporting more than one ABI leads to much wasted memory, often dramatically offsetting any supposed performance gains in real-world usage.
If this x32 thing isn't so great, then please get just one hardcore Linux kernel developer, or some other developer known to care about performance, to say x32 isn't a big deal.
Haven't talked to a kernel developer in years. But how many times have these guys been wrong in the past? A rational goal for anybody doing a 64-bit OS is to make sure all of the software running on that machine is 64-bit. That is your number one goal to ensure performance is not compromised by excessive memory usage.
I'm not a kernel developer, but I'm a performance person who started using computers back in the 8-bit days and learned assembly before C, while (I'm almost sure) most people saying "this isn't a big deal, prove to me it's useful" are much younger guys who are used to being lazy about performance. Sorry about the prejudice; I would like people to give their technical credentials before criticizing.
                    Why should anyone do that when responding to the nonsense you have just posted?
I have 30 years of computer experience; how much do you have?
Depending on your experience you just know what will work, what won't, and what needs testing. x32 is one of those cases where I know it will work (be worth it). And I've yet to see a LOGICAL explanation of why it might not work (not be worth it).
You have offered no rationale to support your point of view, so it is pretty pathetic to demand that others offer up what you want to call logical explanations. In simple terms, x32 wastes RAM, and by doing so it squanders the performance it supposedly recovers. The concept is the modern work of the snake-oil salesmen of prior centuries.



                    • #20
                      Originally posted by macpacheco View Post
I remember running the Microsoft C compiler from floppy disks circa 1990. We had the small model (16-bit code and data pointers), the medium model (16-bit code pointers, segmented data pointers: code limited to 64 KB, data could use all memory), and the large model (segmented code and data pointers). That was a hack... It took a few minutes to compile a plain hello-world program. The medium model had data pointers a different size from code pointers, which was very ugly. And memory segments were a PITA.


                      Originally posted by macpacheco View Post
x32 is a plain linear memory model; pure C/C++ code can in many cases be recompiled with no changes at all. And if changes are needed they are tiny (assuming the code is already compatible with both 32-bit x86 and x86_64; essentially it's the 32-bit #ifdef path with 64-bit timestamps, everything else is the same 32-bit code).
I'm very aware it's a difficult, maybe even lost cause versus just using x86_64 only. But it's a lost cause because people decided it's not worth it, not because it's difficult to do at all! It's 99.9% about people wanting to do it; it's not going to take days of work per package, it's more like minutes of work per package (assuming no __asm code blocks or .S files are involved).
But I don't agree with the "it's a hack" statement. There's no segmentation involved.
My problem is with the assumption that most software doesn't need a 64-bit (63-bit) address space. Once ASLR is in place, x32 will interfere with it and reduce the window to 4 GiB, no? So, for embedded, sure; but aren't they okay with 32-bit anyway? 64-bit timestamps are another issue, and more will come; it's a matter of time. I sense it's a hack similar to segment:offset on the 286, instead of just moving to larger registers globally. Integer overflows have a funny effect of receiving 2 billion dollars for no apparent reason; I can imagine an embedded terminal implemented. Not that I am going to hold you off; breaking things is always fun.
