
Linux Kernel Developers Discuss Dropping x32 Support


  • #21
    Originally posted by sa666666 View Post
    Why not concentrate on x86_64 exclusively...
    Because when x86_64 appeared, most users had already been on x86 for many years, I would say about a decade. One decade to reach the majority and another decade of demoting is needed for 32-bit... nothing really changes without two decades

    64-bit is the future; we are never going back to 32-bit. People should just accept that and move on.
    Nope - 64-bit is considered the standard today, while 128-bit is the future

    Supercomputers will start to struggle with 64-bit in the next decade, so it is better to do 128-bit sooner rather than later
    Last edited by dungeon; 11 December 2018, 05:34 PM.

    Comment


    • #22
      dungeon Why is more width inherently better? Doesn't it depend on the nature of the problem and algorithm?

      Comment


      • #23
        Captain here: there is no such thing as x32. Kernel developers must be silly. There's only x86, in which the x stands for the various model numbers of Intel processors, like the 80386 and 80486. They all end in 86; that's where the naming scheme comes from. Intel never had an 80432 or anything like it, so the x32 name is very wrong. It's supposed to be 32-bit, no x involved.

        Comment


        • #24
          Some posts are funny...

          Those CERN slides are interesting, thanks. As a non-developer, there is one thing I did not understand: does it mean that x32 may disappear, or would it just become a set of libs to be built separately?

          Comment


          • #25
            Originally posted by s_j_newbury View Post
            dungeon Why is more width inherently better? Doesn't it depend on the nature of the problem and algorithm?
            The same way 64-bit is better than 32-bit. Everything is bound by something and plays according to its limitations. Once at least one person hits the 64-bit boundary, demand for 128-bit will happen. Wider is always better, and some supercomputers are expected to hit that boundary in a decade or so.

            Who knows, maybe someone is thinking of keeping 32-bit up to the Year 2038 bug, and then switching straight to 128-bit, ignoring 64-bit altogether

            You see, in 20 years we would talk about who in the hell needs this old 64-bit - the same way we talk about 32-bit today. And about who in the hell needs CoC 2

            I started my computing with an 8-bit computer many decades ago, so nothing surprises me here
            Last edited by dungeon; 11 December 2018, 06:20 PM.

            Comment


            • #26
              Originally posted by dungeon View Post

              The same way 64-bit is better than 32-bit. Everything is bound by something and plays according to its limitations. Once at least one person hits the 64-bit boundary, demand for 128-bit will happen. Wider is always better, and some supercomputers are expected to hit that boundary in a decade or so.

              Who knows, maybe someone is thinking of keeping 32-bit up to the Year 2038 bug, and then switching straight to 128-bit, ignoring 64-bit altogether

              You see, in 20 years we would talk about who in the hell needs this old 64-bit - the same way we talk about 32-bit today. And about who in the hell needs CoC 2

              I started my computing with an 8-bit computer many decades ago, so nothing surprises me here
              But this doesn't make sense unless you can fully utilise all those bits most of the time. If you can run twice the number of half-width computation units in parallel, you win over doing half the work with the other half of the bits unused (excess precision). The reason SIMD is a win is that it allows parallel computation with wide registers when your algorithm permits it. SIMD registers are not general purpose, though, and have much higher overhead and more complex computation units attached.
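
              A minimal sketch of that trade-off with SSE2 intrinsics (function names and loop bounds here are illustrative): the same 128-bit register holds four 32-bit lanes or two 64-bit lanes, so when 32 bits of precision suffice you process twice as many elements per instruction.

                  #include <emmintrin.h> /* SSE2 */
                  #include <stddef.h>
                  #include <stdint.h>

                  /* Four 32-bit additions per instruction... */
                  void add_u32(uint32_t *dst, const uint32_t *a, const uint32_t *b, size_t n)
                  {
                      for (size_t i = 0; i + 4 <= n; i += 4) {
                          __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
                          __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
                          _mm_storeu_si128((__m128i *)(dst + i), _mm_add_epi32(va, vb));
                      }
                  }

                  /* ...but only two 64-bit additions with the same register width. */
                  void add_u64(uint64_t *dst, const uint64_t *a, const uint64_t *b, size_t n)
                  {
                      for (size_t i = 0; i + 2 <= n; i += 2) {
                          __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
                          __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
                          _mm_storeu_si128((__m128i *)(dst + i), _mm_add_epi64(va, vb));
                      }
                  }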

              Using 128-bit pointers just seems completely insane to me. 32 bits is enough for most software. There's a reason current designs do not even use all 64 bits for the address bus!
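
              Incidentally, the pointer-size difference is easy to see with GCC's multilib flags (assuming a toolchain built with x32 support):

                  #include <stdio.h>

                  /* Prints 4/4 when built with `gcc -mx32` (32-bit pointers and longs,
                   * but full x86-64 registers) and 8/8 when built with `gcc -m64`. */
                  int main(void)
                  {
                      printf("sizeof(void *) = %zu\n", sizeof(void *));
                      printf("sizeof(long)   = %zu\n", sizeof(long));
                      return 0;
                  }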

              Comment


              • #27
                Originally posted by jabl View Post
                What happens if said non-x32 library allocates some object and it happens to get an address outside the 32-bit range, and then returns the address of that object to the calling x32 function?

                So I think you're wrong. You might make it work in some cases, but not in the general case, and thus I'd be very surprised if anyone would commit to supporting such a combination.
                They should just add a syscall that prevents memory mappings above 4 GiB. Then you won't be able to get pointers above 4 GiB.
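
                Something in that spirit already exists per mapping: on x86-64 Linux, mmap(2) accepts a MAP_32BIT flag that places the mapping in the low 2 GiB, so the resulting address always fits in 32 bits - though the library would still have to opt in on every call, which is exactly the interop problem jabl describes. A minimal sketch:

                    #define _GNU_SOURCE /* MAP_32BIT is a GNU/Linux extension */
                    #include <stdio.h>
                    #include <sys/mman.h>

                    int main(void)
                    {
                        /* Ask the kernel for an address in the first 2 GiB. */
                        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
                        if (p == MAP_FAILED) {
                            perror("mmap");
                            return 1;
                        }
                        printf("mapped at %p\n", p); /* a low 32-bit address */
                        munmap(p, 4096);
                        return 0;
                    }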

                Comment


                • #28
                  Originally posted by sa666666 View Post
                  I'm a developer that has to deal with these issues, and I know the difference. But these constant hacks to keep 32-bit around are seriously hampering us. Let's move to the future together. I 110% agree with dropping this from the kernel.
                  No, what's faster is the future, not slower. We don't progress forward by catering to lazy pieces of shit.

                  Comment


                  • #29
                    Originally posted by sa666666 View Post
                    Do non-developers realize how much of a F(&*%'ing burden it is having to maintain multiple toolchains, etc? Every time you offer a new alternative such as this, you double the amount of work required.
                    No. And FYI I'm a developer. If you can't write code that can transparently deal with both, you just suck.

                    Comment


                    • #30
                      Originally posted by dungeon View Post
                      Nope - 64-bit is considered the standard today, while 128-bit is the future

                      Supercomputers will start to struggle with 64-bit in the next decade, so it is better to do 128-bit sooner rather than later
                      I wanted to prove you wrong again because you clearly don't understand the scale of the numbers you are speaking of.

                      But it's clear by this point that you don't read and just keep repeating the same stuff, so it's like talking to a wall.
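
                      For anyone else, a sense of the scale (a back-of-the-envelope sketch; the exascale rate is an assumption for illustration): a 64-bit address space covers 16 EiB, orders of magnitude beyond the memory of any machine built today.

                          #include <stdio.h>

                          int main(void)
                          {
                              double n = 0x1p64; /* 2^64 as a hex float literal */

                              /* 2^64 bytes in exbibytes (1 EiB = 2^60 bytes): 16 EiB. */
                              printf("2^64 bytes = %.0f EiB\n", n / 0x1p60);

                              /* Even a machine retiring 1e18 ops/s would take ~18 s
                               * just to enumerate every 64-bit value once. */
                              printf("2^64 ops at 1e18 ops/s = %.1f s\n", n / 1e18);
                              return 0;
                          }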

                      Comment
