Intel Publishes "X86-S" Specification For 64-bit Only Architecture


  • Originally posted by Weasel View Post
    I don't think we'll ever need a 128-bit address space.

    Just think of the amount of time it would take you to scan (read) an entire 64 bits' worth of RAM once at current speeds. Once.
    Current architectures also only support using 48 bits of the 64-bit address space. We still have room to extend from 48 bits to 64 bits without breaking the instruction set, just by updating the architectural rules for page tables.
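
    A minimal C sketch of what that 48-bit limit looks like in practice (my illustration, not something from the X86-S spec): bits 63:48 of a virtual address must be copies of bit 47 (the "canonical" form), so widening the address space changes only the width parameter below, not the instruction set.

      #include <stdint.h>
      #include <stdio.h>

      /* Canonical-address check: with 48-bit virtual addresses, bits 63:48
       * must all equal bit 47. Extending to 57 (or 64) bits only changes
       * `width`; no instructions change. */
      static int is_canonical(uint64_t va, int width)
      {
          uint64_t upper = (uint64_t)((int64_t)va >> (width - 1));
          return upper == 0 || upper == UINT64_MAX;
      }

      int main(void)
      {
          printf("%d\n", is_canonical(0x00007fffffffffffULL, 48)); /* 1: top of lower half */
          printf("%d\n", is_canonical(0x0000800000000000ULL, 48)); /* 0: non-canonical hole */
          printf("%d\n", is_canonical(0xffff800000000000ULL, 48)); /* 1: kernel half */
          return 0;
      }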

    Comment


    • Originally posted by Anux View Post
      It might be possible that some form of PAE will be enough and will far outweigh the negatives we would get from a 128-bit bus, because we will never reach the maximum of a 128-bit address range.
      Maybe we stay at 64-bit and just increase the "cluster size", e.g. we address only quad words instead of single bytes.

      If you apply that thinking to the transition from 32 to 64 bit (DDR2 had just come out), we had 3 GB/s of bandwidth, and reading all the memory that was addressable would have lasted hours. Memory speed will improve by the time that comes.
      128-bit addressing does not necessarily mean 128 bits of address on the memory bus, just as right now all 64-bit processors use fewer than 64 bits of address (typically 48-57 bits).
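
      To put rough numbers on that, a back-of-the-envelope sketch in C (3 GB/s is the DDR2-era figure from this post; 100 GB/s is an assumed modern figure, not one from the thread):

        #include <math.h>
        #include <stdio.h>

        /* One sequential read of an entire address space at a given
         * sustained bandwidth. */
        static void scan_time(int width, double bytes_per_sec)
        {
            double seconds = ldexp(1.0, width) / bytes_per_sec; /* 2^width bytes */
            printf("2^%2d bytes at %3.0f GB/s: %.3g days\n",
                   width, bytes_per_sec / 1e9, seconds / 86400.0);
        }

        int main(void)
        {
            scan_time(32, 3e9);   /* ~1.4 seconds */
            scan_time(64, 3e9);   /* ~71,000 days, roughly two centuries */
            scan_time(64, 100e9); /* still ~2,100 days, almost six years */
            return 0;
        }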

      Comment


      • Originally posted by Weasel View Post
        I don't think we'll ever need a 128-bit address space.

        Just think of the amount of time it would take you to scan (read) an entire 64 bits' worth of RAM once at current speeds. Once.
        Once we reach 16 EB of RAM, we will need a bigger address space to address all of it, because memory-mapped I/O also consumes address space. That is the same reason we needed a 64-bit address space to address 4 GB of RAM.

        Anyway, you are right to think that 128-bit is not an inevitability. If density stops increasing before high-end machines reach 2^64 bytes of RAM, the 128-bit transition will not happen. However, the industry currently expects it to happen in the future. That is why RISC-V has a yet-to-be-defined RV128 variant, and systems languages like C, C++ and Rust have made preparations for the transition in their type systems, e.g. uint128_t in C/C++ and u128 in Rust.

        That said, while it is true that storage density growth typically outpaces storage bandwidth growth, it is wrong to reason about future memory capacities from current memory bandwidth. The industry is also okay with the two growing at different rates.
        Last edited by ryao; 22 May 2023, 12:47 PM.

        Comment


        • Originally posted by carewolf View Post

          Current architectures also only support using 48 bits of the 64-bit address space. We still have room to extend from 48 bits to 64 bits without breaking the instruction set, just by updating the architectural rules for page tables.
          Intel is already at 57-bit support through 5-level page tables.

          Future Intel processors with 6-level page tables should support the full 64-bit address space in hardware.
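
          The arithmetic behind those numbers, as a small C sketch (4 KiB pages and 9 address bits per page-table level are the standard x86-64 parameters):

            #include <stdio.h>

            /* Virtual address width = 12 offset bits (4 KiB pages)
             * plus 9 bits per page-table level. */
            int main(void)
            {
                for (int levels = 4; levels <= 6; levels++)
                    printf("%d-level paging: %d-bit virtual addresses\n",
                           levels, 12 + 9 * levels);
                /* prints: 4 -> 48, 5 -> 57, 6 -> 66 (covers all 64) */
                return 0;
            }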

          Comment


          • [image attachment: 20230522_104902.jpg]
            Is it really so expensive to keep backwards compatibility?

            Comment


            • Originally posted by Mark Rose View Post
              [image attachment: 20230522_104902.jpg]
              Is it really so expensive to keep backwards compatibility?
              Historically, Intel had a process-node advantage that let them outmaneuver competitors with at least double the transistors, so they could spend transistors on things not meant to improve performance, like IBM PC compatibility, while still outperforming the competition. This was such a competitive advantage that other companies stopped trying to compete (with the exception of AMD). Now that the process-node advantage is gone, you see others developing cores that are comparable to Intel’s or even better. Intel needs to find a way to put more transistors into making its cores better without the luxury of a process-node advantage. Killing backward compatibility with FreeDOS is one way to do that. It is not clear whether it will be enough.

              Comment


              • Originally posted by ryao View Post
                Certain things use FreeDOS to update their firmware.
                Is that used for anything but BIOS (or UEFI) updates? I'd imagine those machines wouldn't get these X86-S processors anyway, so it's not really a problem.

                Comment


                • Originally posted by muncrief View Post
                  Intel lost the right to have any say in the future of microprocessor architecture when they tried to force Itanium on the globe, and then spent decades charging 4 to 6 times a reasonable price for pitiful two- or four-core microprocessors out of spite.

                  In fact, if not for AMD, we'd be paying $4,000+ for a crappy four-core Intel microprocessor at this very moment.

                  Add their horrific corporate history of destroying any company or engineer who dared challenge their thievery, and Intel has earned only one thing -

                  The right to pound sand.
                  I bought a 6-core Core i7 970 for $600 in 2010, then a 6-core Core i7 5820K for $380 in 2014. Last I checked, AMD’s now the one charging $4,000+ for Threadripper Pro chips.

                  Comment


                  • Originally posted by WannaBeOCer View Post

                    I bought a 6-core Core i7 970 for $600 in 2010, then a 6-core Core i7 5820K for $380 in 2014. Last I checked, AMD’s now the one charging $4,000+ for Threadripper Pro chips.
                    And the funny part is that those price drops happened despite virtually zero competition from AMD. Intel delivered Core 2 Duo, Nehalem, and Sandy Bridge, and had a basically overwhelming advantage, and prices still came down.

                    Comment


                    • Originally posted by ryao View Post
                      and systems languages like C, C++ and Rust have made preparations for the transition in their type systems, e.g. uint128_t in C/C++ and u128 in Rust.
                      Aren't those just for representing the full output range of non-wrapping, non-saturating versions of operations such as multiplying two 64-bit integers?
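
                      For reference, a small C sketch of that widening-multiply case (note that standard C has no uint128_t; GCC and Clang provide the type as the __int128 extension):

                        #include <inttypes.h>
                        #include <stdio.h>

                        /* Full 128-bit product of two 64-bit operands, split
                         * back into high and low 64-bit halves. */
                        int main(void)
                        {
                            uint64_t a = UINT64_MAX, b = UINT64_MAX;
                            unsigned __int128 p = (unsigned __int128)a * b;

                            printf("high=%016" PRIx64 " low=%016" PRIx64 "\n",
                                   (uint64_t)(p >> 64), (uint64_t)p);
                            return 0;
                        }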

                      Comment
