Intel Is Trying To Support The x32 ABI For LLVM/Clang


  • Intel Is Trying To Support The x32 ABI For LLVM/Clang

    Phoronix: Intel Is Trying To Support The x32 ABI For LLVM/Clang

    While adoption of the Linux x32 ABI hasn't really taken off, with most developers and end-users doing just fine with x86_64-compiled software, Intel is trying to get things back on track for supporting x32 in LLVM and Clang...


  • #2
    Why? Seriously Intel, just drop your 32-bit platforms already. This is getting old.



    • #3
      Originally posted by schmidtbag View Post
      Why? Seriously Intel, just drop your 32-bit platforms already. This is getting old.
      Please, if you don't know what you're talking about, just don't talk about it... If you had even read the article you would know it's not about 32-bit platforms...



      • #4
        Originally posted by mdias View Post
        Please, if you don't know what you're talking about, just don't talk about it... If you had even read the article you would know it's not about 32-bit platforms...
        "The Linux x32 ABI about allowing programs to take advantage of x86_64 features (and being dependent upon x86 64-bit CPUs) while using 32-bit memory pointers."

        To me, this sounds like trying to use 64-bit features on 32-bit architectures. While it doesn't explicitly say this is for 32-bit architectures, I feel that is the intended use in the long run. While I rarely mess with languages like C/C++, I don't see why (if you're on a 64-bit system) you'd use 32-bit pointers expecting them to use x86-64 features when you could just use 64-bit pointers. If what I'm saying is wrong then sure, it might not have been great to presume the worst, but it still doesn't change the fact that Intel has been dragging 32-bit products along even into 2013, and I'm getting tired of it.
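
        For what it's worth, the quoted definition is easy to see in code. A minimal sketch, assuming a multilib GCC or Clang toolchain with x32 support (the -mx32 target flag; the file name is made up):

        /* sizes.c - the same source built for two ABIs:
         *   gcc -m64  sizes.c   -> x86-64: 8-byte pointers
         *   gcc -mx32 sizes.c   -> x32: 4-byte pointers, yet still a
         *                          64-bit binary using the full x86-64
         *                          register set
         */
        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            printf("sizeof(void *)  = %zu\n", sizeof(void *));
            /* 64-bit arithmetic stays native on x32: an int64_t fits in
             * one x86-64 register instead of an i686-style register pair. */
            printf("sizeof(int64_t) = %zu\n", sizeof(int64_t));
            return 0;
        }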



        • #5
          Originally posted by schmidtbag View Post
          "The Linux x32 ABI about allowing programs to take advantage of x86_64 features (and being dependent upon x86 64-bit CPUs) while using 32-bit memory pointers."

          To me, this sounds like trying to use 64-bit features on 32-bit architectures. While it doesn't explicitly say this is for 32-bit architectures, I feel that is the intended use in the long run. While I rarely mess with languages like C/C++, I don't see why (if you're on a 64-bit system) you'd use 32-bit pointers expecting them to use x86-64 features when you could just use 64-bit pointers. If what I'm saying is wrong then sure, it might not have been great to presume the worst, but it still doesn't change the fact that Intel has been dragging 32-bit products along even into 2013, and I'm getting tired of it.
          Exactly. I don't see any reason to keep using 32-bit architectures either. Not to mention that if it weren't for AMD, we would still be using 32-bit CPUs on the desktop and 64-bit only on servers ($$$$).



          • #6
            Originally posted by schmidtbag View Post
            I don't see why (if you're on a 64-bit system) you'd use 32-bit pointers expecting them to use x86-64 features when you could just use 64-bit pointers.
            Since you have 64 bits of bandwidth, you can transfer two 32-bit pointers at a time, which might make stuff faster.
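
            A quick sketch of why that can matter (file name made up; any C compiler targeting both ABIs will do):

            /* node.c - pointer-heavy data shrinks under x32, so more of
             * it fits per cache line and per 64-bit memory transfer. */
            #include <stdio.h>

            struct node {
                struct node *next;
                struct node *prev;
            };

            int main(void)
            {
                /* x86-64: 16 bytes; x32: 8 bytes, i.e. both pointers
                 * fit in a single 64-bit load. */
                printf("sizeof(struct node) = %zu\n", sizeof(struct node));
                return 0;
            }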



            • #7
              Originally posted by schmidtbag View Post
              "The Linux x32 ABI about allowing programs to take advantage of x86_64 features (and being dependent upon x86 64-bit CPUs) while using 32-bit memory pointers."
              Originally posted by schmidtbag View Post
              To me, this sounds like trying to use 64-bit features on 32-bit architectures. While it doesn't explicitly say this is for 32-bit architectures, I feel that is the intended use in the long run. While I rarely mess with languages like C/C++, I don't see why (if you're on a 64-bit system) you'd use 32-bit pointers expecting them to use x86-64 features when you could just use 64-bit pointers.
              In some cases, moving from 32-bit to 64-bit pointers has real drawbacks: every pointer gets twice as big, which costs cache space and memory bandwidth.

              Originally posted by schmidtbag View Post
              If what I'm saying is wrong then sure, it might not have been great to presume the worst, but it still doesn't change the fact that Intel has been dragging 32-bit products along even into 2013, and I'm getting tired of it.
              x32 is not 32-bit. You quoted as much yourself.



              • #8
                Originally posted by wargames View Post
                Exactly. I don't see any reason to keep using 32-bit architectures either.
                But x32 REQUIRES A 64 BIT ARCHITECTURE.

                x32 != x86
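
                You can even check this at compile time. A small sketch using the macros GCC and Clang predefine for these targets (file name made up):

                /* abi.c - x32 still targets the x86-64 instruction set;
                 * only the data model changes to ILP32. */
                #include <stdio.h>

                int main(void)
                {
                #if defined(__x86_64__) && defined(__ILP32__)
                    puts("x32: x86-64 instruction set, 32-bit pointers");
                #elif defined(__x86_64__)
                    puts("plain x86-64 (LP64)");
                #elif defined(__i386__)
                    puts("i386/i686: actual 32-bit x86");
                #else
                    puts("something else entirely");
                #endif
                    return 0;
                }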



                • #9
                  Originally posted by doom_Oo7 View Post
                  Since you have 64 bits of bandwidth, you can transfer two 32-bit pointers at a time, which might make stuff faster.
                  Well in that case, this is awesome.



                  • #10
                    Originally posted by schmidtbag View Post
                    Why? Seriously Intel, just drop your 32-bit platforms already. This is getting old.
                    x32:
                    - amd64 with 32-bit pointers.
                    - i686 fixed and improved with modern features.

                    For a developer, for example, off_t and other C/C++ types are 64-bit, so you don't have to worry about 4+ GB files and don't have to (remember to) pass "-D_FILE_OFFSET_BITS=64" to the compiler any longer, because those problems have been fixed in the x32 ABI. I recall that 1-2 years ago Linus told a Linux dev who submitted x32 patches to the kernel to fix the x32 ABI by making such types 64-bit, to avoid the problems that still plague i686 ("standard" 32-bit x86).
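
                    To illustrate the off_t point (a sketch; assumes glibc, file name made up):

                    /* offsets.c - on x32 (as on x86-64), off_t is 64-bit
                     * out of the box, so 4+ GB files just work. On plain
                     * i686 this prints 4 unless you remember to build
                     * with -D_FILE_OFFSET_BITS=64. */
                    #include <stdio.h>
                    #include <sys/types.h>

                    int main(void)
                    {
                        printf("sizeof(off_t) = %zu\n", sizeof(off_t));
                        return 0;
                    }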

