Intel Publishes "X86-S" Specification For 64-bit Only Architecture

  • Originally posted by user1 View Post

    And that's why I don't understand the hype around ARM / RISC-V. Maybe those who're hyped about ARM / RISC-V don't give a damn about backwards compatibility, but it's something I deeply care about.
    Honestly, I don't think that's much of a problem in this case. Admittedly I haven't read the details, so I don't know whether these new CPUs will retain compatibility mode (e.g. 32-bit processes on a 64-bit kernel), but if not, and you still have some 32-bit-only software you need to run, then DOSBox or qemu will do the trick. The emulated CPU's performance is likely to be at least as good as the i686-class CPUs the software was intended to run on.

    Comment


    • Originally posted by microcode View Post
      128-bit addressing does not necessarily mean 128 bits of address on the memory bus, just as right now all 64-bit processors use fewer than 64 bits of address (typically 48-57 bits)
      Sure, but it still comes with a burden: all registers would have to be 128-bit, as would all data buses and integer operations. There is a reason we don't have 64-bit address buses; it costs transistors, die area, power and pins, and quite a lot of them.

      If a single program won't need more than 16 EB in the future, PAE might be a much better solution. Not sure I'll live long enough to witness how that finally gets solved.
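      For scale, a quick back-of-envelope of what the various address widths actually cover (plain powers of two; the 48-57 bit range is the one quoted above, everything else is just unit conversion):

      ```python
      # Rough size of a 2**bits byte address space, printed in binary units.
      def addressable(bits):
          size = 2 ** bits
          for unit in ("B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB"):
              if size < 1024:
                  return f"{size:g} {unit}"
              size /= 1024
          return f"{size:g} YiB"

      for bits in (32, 48, 57, 64, 128):
          print(f"{bits:3d}-bit -> {addressable(bits)}")
      # 32-bit -> 4 GiB, 48-bit -> 256 TiB, 57-bit -> 128 PiB, 64-bit -> 16 EiB
      ```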

      Comment


      • Originally posted by ryao View Post
        Intel needs to find a way to put more transistors into making their cores better without the luxury of a process node advantage. Killing backward compatibility with FreeDOS is one way to do that. It is not clear if it will be enough.
        A few thousand transistors? That wouldn't even be enough to add 1 KB of cache. It's not worth the trouble of losing backwards compatibility.
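        For reference, a rough lower bound on what 1 KB of cache costs, assuming the usual 6-transistor SRAM cell and counting only the data array (no tags, decoders or sense amplifiers):

        ```python
        # Lower bound on transistors in a 1 KiB SRAM data array (6T cells only).
        CACHE_BYTES = 1024
        TRANSISTORS_PER_BIT = 6                          # classic 6T SRAM cell
        print(CACHE_BYTES * 8 * TRANSISTORS_PER_BIT)     # 49152
        ```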

        Comment


        • Originally posted by hajj_3 View Post
          Too late. 64-bit RISC-V is going to eat x86 for breakfast.
          Timeline?

          Comment


          • Originally posted by Tomin View Post

            Is that used for anything but BIOS (or UEFI) updates? I'd imagine that those wouldn't get these x86-S processors anyway so it's not really a problem.
            I recall seeing it used for SSDs in the past.

            Comment



            • Originally posted by ssokolow View Post

              Aren't those just for representing the full output range of non-wrapping, non-saturating versions of operations such as multiplying two 64-bit integers?
              It could be used that way, but it is also consistent with the industry's expectation for a 128-bit transition. I have been told in the past to keep 128-bit architectures in mind since they are planned.
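              For the multiply example from the quote, the full product of two 64-bit values needs up to 128 bits. A minimal sketch (Python integers are arbitrary precision, so this only illustrates the high/low split a 128-bit destination would hold):

              ```python
              # Full 64x64 -> 128-bit multiply, split into high and low 64-bit halves.
              a = 0xFFFFFFFFFFFFFFFF        # largest unsigned 64-bit value
              b = 0xFFFFFFFFFFFFFFFF
              product = a * b               # needs 128 bits to hold without wrapping
              hi, lo = product >> 64, product & 0xFFFFFFFFFFFFFFFF
              print(hex(hi), hex(lo))       # 0xfffffffffffffffe 0x1
              ```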

              Comment


              • Originally posted by Anux View Post
                If you apply that thinking to the transition from 32 to 64 bit (DDR2 had just come out), we had 3 GB/s of bandwidth, and reading all the memory that was addressable would have taken hours. Memory speed will improve by the time that day comes.
                Sorry, what?

                3 GB/s would read a 32-bit address space in a little more than a second.

                You seem to be one of those people who don't grasp the scale of these numbers. 64-bit is way, way, way larger than 32-bit. It's far, far bigger a jump than the transition from 16-bit to 32-bit.

                Create a 64 KB file. Pretty small file. Now look at each byte in it. Each byte is another 64 KB file. That's how big 4 GB is compared to 64 KB. It's huge.

                Now create a 4 GB file and look at each byte in it. Each byte is another 4 GB file, not just 64 KB. The space grows exponentially with the number of address bits. That's how big 64-bit is compared to 32-bit.

                So at 4 GB/s (I know that's slow by today's standards), reading a 64-bit address space just once is equivalent to reading a 4 GB file at one byte per second. It would take you about 136 years.

                Heck, 192 bits is enough for way more unique addresses than there are atoms on Earth.
                Last edited by Weasel; 23 May 2023, 01:35 PM.
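                The arithmetic behind the 136-year figure, spelled out (treating 4 GB/s as 4 × 2^30 bytes/s, close enough to the post's number):

                ```python
                # Time for a single linear read of a full 64-bit address space at 4 GiB/s.
                ADDRESS_SPACE = 2 ** 64            # bytes
                BANDWIDTH = 4 * 2 ** 30            # 4 GiB/s
                seconds = ADDRESS_SPACE / BANDWIDTH
                print(seconds / (3600 * 24 * 365.25))    # ~136 years

                # The "zooming" analogy in numbers:
                assert 2 ** 16 * 2 ** 16 == 2 ** 32      # 64 KiB of 64 KiB files = 4 GiB
                assert 2 ** 32 * 2 ** 32 == 2 ** 64      # 4 GiB of 4 GiB files = full 64-bit space
                ```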

                Comment


                • Originally posted by Weasel View Post
                  3 GB/s would read a 32-bit address space in a little more than a second.
                  Misunderstanding: I was talking about reading a 64-bit (or 40-bit, in those days) address space with the technology we had at that time.

                  2^40 bytes / 3 GB/s ≈ 6 minutes; with 48 bits (the first Opteron) it's already hours.

                  Create a 64 KB file. Pretty small file. Now l ....
                  You could have explained it much more simply: every extra bit doubles the range of numbers ...
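                  And the same conversion for the 40- and 48-bit cases (again treating 3 GB/s as 3 × 2^30 bytes/s; the rounding doesn't change the point):

                  ```python
                  # Time to stream a 40-bit vs. 48-bit address space at ~3 GiB/s (DDR2-era figure from the post).
                  BANDWIDTH = 3 * 2 ** 30
                  for bits in (40, 48):
                      seconds = 2 ** bits / BANDWIDTH
                      print(f"{bits}-bit: {seconds / 60:.1f} min ({seconds / 3600:.1f} h)")
                  # 40-bit: 5.7 min, 48-bit: 24.3 h
                  ```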

                  Comment


                  • Originally posted by Anux View Post
                    You could have explained it much easier with every bit doubles the range of numbers ...
                    Sure, but that doesn't really help people visualize it. Most people don't understand exponential growth (or, as a consequence, compounding). They can't picture something like "doubling every year" and how fast it grows.

                    If I say that every byte in a 4 GB file is a 4 GB file itself, they can visualize it by "zooming" into each byte and then realize how huge it is.

                    Comment


                    • Totally agree. As soon as possible.

                      Comment
