Linux Kernel Developers Discuss Dropping x32 Support


  • #51
    Originally posted by Weasel View Post
    No, what's faster is the future, not slower. We don't progress forward by catering to lazy pieces of shit.

    No. And FYI I'm a developer. If you can't write code that can transparently deal with both, you just suck.
    Looks like I triggered the local autist troll. Nowhere did I say that I couldn't do it, just that with all the extra work that it entails, no developer (in 2018) SHOULD HAVE TO DO IT. Do you write cross-platform code? Do you have any idea how complicated it can be to make sure everything works the same way everywhere? How many things have to be tested each time you introduce another variable into the equation? One has to test 32 and 64-bit in Windows, 64-bit only (thankfully) in OSX, and 32 and 64-bit in Linux (and various Linux versions, to boot). Now we have YET ANOTHER alternative to worry about.

    It's not about the ability of the developer to do it. It's about WHY SHOULD THEY NEED TO??? 64-bit is the standard, and probably will be for quite some time. Why go backwards? I remember this exact same shit being argued when 32-bit was a new thing. You had to drag some 16-bit developers kicking and screaming into the 32-bit era. Some people are just really resistant to change in any form.

    EDIT: And I'm not advocating for Intel 64-bit systems exclusively. I'm advocating for 64-bit everywhere instead of mixing in 32-bit. I recognize that 32-bit is still in use in many places, and it will take years to fully port it all. But we should be moving forward and deprecating 32-bit support, not inventing new ways to keep it around.

    From that POV, it also sounds exactly like the IPv4 vs. IPv6 debate. Fear of change.
    Last edited by sa666666; 12 December 2018, 08:32 AM.



    • #52
      Originally posted by sa666666 View Post
      Looks like I triggered the local autist troll. Nowhere did I say that I couldn't do it, just that with all the extra work that it entails, no developer (in 2018) SHOULD HAVE TO DO IT. Do you write cross-platform code? Do you have any idea how complicated it can be to make sure everything works the same way everywhere? How many things have to be tested each time you introduce another variable into the equation? One has to test 32 and 64-bit in Windows, 64-bit only (thankfully) in OSX, and 32 and 64-bit in Linux (and various Linux versions, to boot). Now we have YET ANOTHER alternative to worry about.

      It's not about the ability of the developer to do it. It's about WHY SHOULD THEY NEED TO??? 64-bit is the standard, and probably will be for quite some time. Why go backwards? I remember this exact same shit being argued when 32-bit was a new thing. You had to drag some 16-bit developers kicking and screaming into the 32-bit era. Some people are just really resistant to change in any form.

      EDIT: And I'm not advocating for Intel 64-bit systems exclusively. I'm advocating for 64-bit everywhere instead of mixing in 32-bit. I recognize that 32-bit is still in use in many places, and it will take years to fully port it all. But we should be moving forward and deprecating 32-bit support, not inventing new ways to keep it around.

      From that POV, it also sounds exactly like the IPv4 vs. IPv6 debate. Fear of change.
      It's not fear of change. It's using the right tool for the job. You are aware that modern 8-bit processors are still sold and used today? https://uk.farnell.com/microchip/atm...73390000126318

      The future isn't a march to ever more "bit"-ness. The move to 32-bit was necessitated, like the move to 16-bit and the move to 64-bit, by ever larger addressing requirements. If the benefits of 64-bit were as clear-cut as you imply, we would all be using DEC Alpha CPUs now. The Alpha failed because it couldn't make a compelling case for the advantages it provided at the time. It was fantastic at computation requiring 64-bit integer arithmetic, and it saw use in specialised systems that could take advantage of that, but there was no mass-market application. Even the Windows NT that Microsoft released for it ran, much like x32 does on Linux, as a 32-bit OS on 64-bit hardware. Why would you buy an Alpha when you could buy a Pentium at a fraction of the cost?

      There is always a trade-off, and to my mind a case has never been made for general systems software requiring 64-bit longs and pointers (LP64); but there are classes of software where it is a huge win, for example virtualisation. 64-bit is probably an ideal internal width for a general-purpose CPU, which is why DEC went 64-bit with the Alpha, but there are costs, and if you can avoid incurring them for your OS, why wouldn't you?
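
      [Editor's note: a minimal C program makes the LP64 trade-off mentioned above concrete. This sketch assumes a standard LP64 Linux target such as x86-64; under an ILP32 ABI like x32, `long` and pointers drop back to 4 bytes each.]

      ```c
      #include <stdio.h>

      int main(void) {
          /* Under LP64 (the standard x86-64 Linux data model), long and
           * pointers are 8 bytes each, while int stays at 4. Under an
           * ILP32 ABI such as x32, long and void* are 4 bytes, so
           * pointer-heavy data structures shrink by roughly half. */
          printf("sizeof(int)   = %zu\n", sizeof(int));
          printf("sizeof(long)  = %zu\n", sizeof(long));
          printf("sizeof(void*) = %zu\n", sizeof(void *));
          return 0;
      }
      ```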



      • #53
        I understand having another set of libraries around is a PITA, and you usually have to build them yourself (e.g. Debian only has the libc libraries in x32 flavour in the standard repos).
        But given that regular 32-bit is and will still be supported in the kernel for a long time, I wonder how much extra work this really is. Compared to the horrible hacks in i386, x32 is limited to different pointer sizes, which need to be handled anyway for i386 compatibility.

        So a concept of multi-arch needs to prevail anyway.
        Some architectures like MIPS and RISC-V are a lot cleaner, in that no "mode switch" is necessary between encodings; wider bit operations are just additional instructions. There it should be possible to support lower "bitness" with a single set of libraries for a big range of functions (anything not depending on exporting structures with pointers).

        x32 would make sense for embedded (or anywhere you don't need more than 4GB of process space), but the lack of support is keeping a lot of people from using it.



        • #54
          Originally posted by sa666666 View Post
          Why do some people constantly cling to the old? 32-bit is dying everywhere; let it die.

          And for those that mention 'small' platforms (smartphones, etc), before long or even already, they will be fully 64-bit too.

          I remember similar arguments BITD about using 15/16-bit colour instead of 32-bit colour, since it was so much faster. Probably was a little faster, but it really messed with developers having to code for so many different possibilities. Fast forward a few years, and now everyone uses 32-bit colour, and you would be considered weird to even suggest using 16-bit instead. The hardware became fast enough to handle it, and the coding became a lot easier because of it.

          64-bit is the future; we are never going back to 32-bit. People should just accept that and move on.

          And before anyone suggests I'm confusing x32 and 32-bit libs, I'm not. I'm a developer that has to deal with these issues, and I know the difference. But these constant hacks to keep 32-bit around are seriously hampering us. Let's move to the future together. I 110% agree with dropping this from the kernel.
          Except that 16-bit vs. 32-bit colour makes a difference from a user's point of view. 64-bit doesn't change anything for the user, except that more RAM is used. I'm tired of seeing software become more and more bloated. My computer is 16x more powerful than 10 years ago, and I do absolutely nothing more with it. Proprietary software isn't optimized because they don't care about it. F*ck them. But free software should be leading, with blazing-fast performance even on 4GB of RAM, which should definitely be enough.
          Last edited by Flaburgan; 12 December 2018, 10:07 AM.



          • #55
            Originally posted by s_j_newbury View Post
            More memory just makes developers and system administrators lazier.
            True, except it's usually not the developers who are lazy (we love beautiful, optimized code) but the hierarchy, who don't give them the time and resources to write good code. Thank you, profit-driven companies.



            • #56
              Originally posted by sa666666 View Post
              Looks like I triggered the local autist troll. Nowhere did I say that I couldn't do it, just that with all the extra work that it entails, no developer (in 2018) SHOULD HAVE TO DO IT. Do you write cross-platform code? Do you have any idea how complicated it can be to make sure everything works the same way everywhere? How many things have to be tested each time you introduce another variable into the equation? One has to test 32 and 64-bit in Windows, 64-bit only (thankfully) in OSX, and 32 and 64-bit in Linux (and various Linux versions, to boot). Now we have YET ANOTHER alternative to worry about.
              What? That's QA's job.

              Originally posted by sa666666 View Post
              It's not about ability of the developer to do it. It's about WHY SHOULD THEY NEED TO???
              Because it's faster for the users and means better software quality. The user of the software (who can be the developer himself) is king. It's really as simple as that.

              Originally posted by sa666666 View Post
              64-bit is the standard, and probably will be for quite some time. Why go backwards? I remember this exact same shit being argued when 32-bit was a new thing. You had to drag some 16-bit developers kicking and screaming into the 32-bit era. Some people are just really resistant to change in any form.

              EDIT: And I'm not advocating for Intel 64-bit systems exclusively. I'm advocating for 64-bit everywhere instead of mixing in 32-bit. I recognize that 32-bit is still in use in many places, and it will take years to fully port it all. But we should be moving forward and deprecating 32-bit support, not inventing new ways to keep it around.

              From that POV, it also sounds exactly like the IPv4 vs. IPv6 debate. Fear of change.
              WTF are you even talking about? x32 is NEWER than "64-bit". You're just babbling nonsense.

              This has nothing to do with fear of change; you just don't like it because you're a lazy mofo and want what's easiest for you at the expense of your users. "Fear of change" is just a scapegoat.



              • #57
                Originally posted by Jabberwocky View Post
                Please tell me more. I want to learn to solve complex low level caching and bandwidth optimization problems by writing abstract portable code.

                Is Java better than x32?
                You don't need assembly language to organize your data (and your structs) better.

                I won't tell you more since you're clearly a dummy.
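
                [Editor's note: to give that claim some substance, here is a minimal sketch of what "organizing your structs better" can mean: reordering fields so wider members come first removes alignment padding, no assembly required. Field names are made up for illustration; exact sizes assume typical x86-64 alignment.]

                ```c
                #include <assert.h>
                #include <stdio.h>

                /* Padded layout: the compiler inserts 7 bytes after 'flag' to
                 * align 'value', and 7 more after 'tag' to round the struct up
                 * to a multiple of its 8-byte alignment. */
                struct padded {
                    char flag;    /* 1 byte + 7 bytes padding */
                    double value; /* 8 bytes */
                    char tag;     /* 1 byte + 7 bytes padding */
                };

                /* Same fields, widest first: only 6 bytes of tail padding. */
                struct reordered {
                    double value; /* 8 bytes */
                    char flag;    /* 1 byte */
                    char tag;     /* 1 byte + 6 bytes padding */
                };

                int main(void) {
                    printf("padded:    %zu bytes\n", sizeof(struct padded));
                    printf("reordered: %zu bytes\n", sizeof(struct reordered));
                    assert(sizeof(struct reordered) < sizeof(struct padded));
                    return 0;
                }
                ```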



                • #58
                  Originally posted by wizard69 View Post
                  Your limited RAM goes a lot farther when there is only one set of libs loaded into it.
                  Your limited RAM is the CPU's cache. A 64-bit pointer takes up twice the cache space of a 32-bit one. Nobody cares that you loaded a different version of the library into RAM, because it's about the CACHE.
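
                  [Editor's note: rough back-of-the-envelope arithmetic, assuming the 64-byte cache lines common on current x86 CPUs, shows why pointer width matters for cache footprint.]

                  ```c
                  #include <stdio.h>

                  int main(void) {
                      const unsigned line = 64;   /* assumed cache-line size in bytes */
                      const unsigned n = 1024;    /* pointers in a pointer-heavy structure */

                      /* 8-byte pointers (x86-64 LP64) vs 4-byte pointers (x32 ILP32):
                       * the same array of pointers occupies twice as many cache lines
                       * under LP64. */
                      unsigned lines_lp64  = n * 8 / line;
                      unsigned lines_ilp32 = n * 4 / line;

                      printf("%u pointers: %u cache lines at 8 bytes, %u at 4 bytes\n",
                             n, lines_lp64, lines_ilp32);
                      return 0;
                  }
                  ```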



                  • #59
                    Originally posted by RealNC View Post
                    No. The kernel will still support 32-bit. Note that x32 is NOT 32-bit. It's a special version of 64-bit.

                    In fact, it's highly unlikely that the kernel you are running right now even has x32 support enabled. And your system most probably doesn't even have x32 libraries installed. None of the popular 64-bit distributions ship with x32 support. They only ship 32-bit libraries.
                    My kernel has 32-bit enabled and is running a multilib system. I know because I compiled the kernel myself (running Gentoo) and have the 32-bit ABI enabled via the abi_x86_32 USE flag, as it's more or less required.



                    • #60
                      x32 is essentially a separate architecture. In order to make use of the x32 ABI, your whole stack has to be compiled with x32 ABI support, starting from the kernel (CONFIG_X86_X32). That's the reason the use of x32 in the real world is so limited. I challenge you to find a Linux distro compiled for x32; there are not that many. At this point, the best choice is perhaps the Debian x32 port, or better yet Gentoo. In Gentoo, of course, you end up compiling everything on your own. But things are not guaranteed to work because, again, almost nobody uses x32.

