Intel Continues Prepping The Linux Kernel For X86S


  • #61
    Originally posted by dragorth View Post

    Each of these architectures has the equivalent of SIMD instructions. These are CISC instructions. The RISC philosophy was supposed to be the bare minimum of instructions: simpler instructions that could be combined to do the same thing at higher speed.

    ARM has Thumb and NEON as older examples, RISC-V is working on the P extension, PowerPC has AltiVec and SPE, UltraSPARC has VIS and VIS2, MIPS had MDMX, HP's PA-RISC had MAX, and the Cell processor had its SPUs.

    This is just one set of instructions; every processor that includes HW virtualization tech has a whole new set of CISC instructions used to accelerate the translation of memory addresses.

    ARM, in the form of Apple Silicon, now has a set of instructions to accelerate translation of x86/x86-64 applications specifically. This is another set of CISC instructions.

    Almost all mainstream CPUs now contain two CPUs, one of them a security CPU that runs a hidden OS. Intel and AMD have an ARM core in theirs, and ARM has its own version of the same.

    Do you really need me to go on, with things like big.LITTLE cores, none of these options clocking as high as x86 (which was supposed to be the point of them), and whatever else they have added, like GPUs on die?

    There are NO RISC systems today. You can find older CPU designs, but that is about it.

    Now, you seem to have fallen for the marketing that companies like ARM put out, but the reality is, RISC died. Period.
    Well, vector instructions are not inherently CISC; they belong to the bare minimum. They cannot be replaced with other instructions at the same speed. Otherwise we would have to consider addition instructions CISC, since they can be replaced by bitwise operations. The same holds true for virtualisation (by the way, no extra instructions are needed to speed up the translation, just some modifications to the page table to enable 2-stage translation). Apple Silicon's instructions might or might not be CISC, but since they are used for emulation they can die with x86. The secondary CPUs are irrelevant to the RISC/CISC debate, as is the big.LITTLE argument.
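
    (A minimal sketch of that addition analogy, purely for illustration; the helper name is made up. Addition really can be rebuilt from bitwise AND/XOR/shift, a software ripple-carry adder, it is just far slower than a native add:)

    #include <stdint.h>
    #include <stdio.h>

    /* Addition decomposed into bitwise operations. */
    static uint32_t add_bitwise(uint32_t a, uint32_t b) {
        while (b != 0) {
            uint32_t carry = a & b;   /* bits that generate a carry */
            a = a ^ b;                /* sum without the carries */
            b = carry << 1;           /* propagate carries one place left */
        }
        return a;
    }

    int main(void) {
        printf("%u\n", add_bitwise(40, 2)); /* prints 42 */
        return 0;
    }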

    That being said, there might be CISC instructions in the above architectures, maybe included in the extensions you listed. Crypto comes to mind. That is reasonable, though, since it speeds up some common operations. For me, the boundary between (mostly) RISC and CISC is the need for micro-ops or large amounts of silicon to implement some "niche" instructions. Another CISC trait is having specialised instructions that can be replaced by more generic ones (looking at you, push/pop). My x86 hate is not based only on CISC, though some of it is a side effect of CISC. I also hate variable-length instructions, for example, which exist in the RISC-V C extension as well.
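
    (To make the push/pop point concrete, a hedged sketch with a software stack in C; the names are illustrative, not any real ABI. A push is just a specialised fusion of two generic operations, a pointer adjustment and a store:)

    #include <stdint.h>

    uint64_t stack[64];
    uint64_t *sp = stack + 64;   /* grows downward, like the x86 stack */

    /* What a single x86 "push" does, spelled out as generic steps. */
    void push(uint64_t value) {
        sp = sp - 1;   /* step 1: adjust the stack pointer (a plain subtract) */
        *sp = value;   /* step 2: store through the pointer (a plain store) */
    }

    /* Likewise, "pop" is a generic load followed by a generic add. */
    uint64_t pop(void) {
        uint64_t value = *sp;
        sp = sp + 1;
        return value;
    }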



    • #62
      Originally posted by marios View Post

      Well, vector instructions are not inherently CISC; they belong to the bare minimum. They cannot be replaced with other instructions at the same speed. Otherwise we would have to consider addition instructions CISC, since they can be replaced by bitwise operations. The same holds true for virtualisation (by the way, no extra instructions are needed to speed up the translation, just some modifications to the page table to enable 2-stage translation). Apple Silicon's instructions might or might not be CISC, but since they are used for emulation they can die with x86. The secondary CPUs are irrelevant to the RISC/CISC debate, as is the big.LITTLE argument.

      That being said, there might be CISC instructions in the above architectures, maybe included in the extensions you listed. Crypto comes to mind. That is reasonable, though, since it speeds up some common operations. For me, the boundary between (mostly) RISC and CISC is the need for micro-ops or large amounts of silicon to implement some "niche" instructions. Another CISC trait is having specialised instructions that can be replaced by more generic ones (looking at you, push/pop). My x86 hate is not based only on CISC, though some of it is a side effect of CISC. I also hate variable-length instructions, for example, which exist in the RISC-V C extension as well.
      SIMD is literally large amounts of silicon implementing some niche instructions. Most general-purpose programs don't need it, and the ones that do are specialized: games, 3D, or scientific computing, which isn't normally necessary on consumer hardware.
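
      (For context on what those SIMD blocks actually buy: one instruction operating on several lanes at once. A small sketch using x86 SSE intrinsics; assumes a compiler providing <immintrin.h> and SSE support:)

      #include <immintrin.h>
      #include <stdio.h>

      int main(void) {
          float a[4] = {1, 2, 3, 4};
          float b[4] = {10, 20, 30, 40};
          float r[4];

          /* One SIMD add processes four floats at once... */
          __m128 va = _mm_loadu_ps(a);
          __m128 vb = _mm_loadu_ps(b);
          _mm_storeu_ps(r, _mm_add_ps(va, vb));

          /* ...where scalar code would need four separate adds. */
          for (int i = 0; i < 4; i++)
              printf("%g ", r[i]);  /* 11 22 33 44 */
          printf("\n");
          return 0;
      }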

      Hardware virtualization deals with more than just memory: it handles translating between ring levels, encrypting memory, and splitting hardware resources, such as for SR-IOV.

      Crypto is a commonly used instruction, but now you have moved the goalposts of the original RISC ideal, which was to get rid of all of that in order to reach clock speeds much higher than CISC CPUs could.
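
      (On the crypto point, AES-NI is a concrete example of a specialised instruction replacing a pile of generic code. A minimal sketch; assumes an x86 CPU with AES-NI and compiling with -maes, and the state/round key here are dummy values:)

      #include <immintrin.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          /* One instruction performs a full AES round (SubBytes,
             ShiftRows, MixColumns, AddRoundKey) that would otherwise
             take dozens of table lookups and XORs in software. */
          __m128i state = _mm_set1_epi8(0x42);
          __m128i rkey  = _mm_set1_epi8(0x13);
          state = _mm_aesenc_si128(state, rkey);

          uint8_t out[16];
          _mm_storeu_si128((__m128i *)out, state);
          for (int i = 0; i < 16; i++)
              printf("%02x", out[i]);
          printf("\n");
          return 0;
      }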

      I don't know if you have seen The Mill CPU videos. If that ever came out, you might like it better than even today's RISC designs.



      • #63
        Originally posted by rmfx View Post
        128-bit computing is more a philosophical thing than a real thing.
        Before you run out of 64-bit memory addresses, you would need a _single system_ with so much memory that it would equal all the memory humanity produces in six months put together.

        No project within the next century will need 128-bit, maybe not ever, because it would never be profitable, and 64-bit-based cluster alternatives will always win. Especially when you know RAM makers slow down their production on purpose, because they make too much of it, to reach the best margins.
        I talked about x86_128 as a joke.
        But never say never; it may yet come.
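
        (For scale, a back-of-the-envelope check of the claim quoted above: a full 64-bit address space covers 2^64 bytes = 16 EiB, about 1.8 × 10^19 bytes. A throwaway C snippet to verify:)

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double bytes = ldexp(1.0, 64);                /* 2^64 */
            printf("%.3e bytes\n", bytes);                /* 1.845e+19 bytes */
            printf("%.0f EiB\n", bytes / ldexp(1.0, 60)); /* 16 EiB */
            return 0;
        }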



        • #64
          Originally posted by SigHunter View Post
          but why is it not called x86s_64 or x64s or whatever? x86s gives the impression that it is 32-bit if you otherwise know nothing about it.
          If you don't know the bitness of x86s, then you probably don't care about bitness in the first place anyway.
          Originally posted by rmfx View Post
          128-bit computing is more a philosophical thing than a real thing. Before you run out of 64-bit memory addresses, you would need a _single system_ with so much memory…
          While you won't be able to install 2^64 bytes of memory in a physical machine anytime soon, there is a practical reason to want a 128-bit CPU: the extra bits come in handy when you want to use modern shenanigans like pointer validation.
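
          (A sketch of that pointer trick: on current 64-bit CPUs only the low 48 or 57 bits of a virtual address are typically used, so the spare high bits can carry a tag, and wider pointers would give even more room. Illustrative C only; the tag layout is made up, not any real scheme:)

          #include <assert.h>
          #include <stdint.h>
          #include <stdlib.h>

          #define TAG_SHIFT 56  /* hypothetical: stash a tag in the top byte */

          static void *tag_ptr(void *p, uint8_t tag) {
              return (void *)((uintptr_t)p | ((uintptr_t)tag << TAG_SHIFT));
          }

          static uint8_t ptr_tag(void *p) {
              return (uint8_t)((uintptr_t)p >> TAG_SHIFT);
          }

          static void *strip_tag(void *p) {  /* must strip before dereferencing */
              return (void *)((uintptr_t)p & (((uintptr_t)1 << TAG_SHIFT) - 1));
          }

          int main(void) {
              int *x = malloc(sizeof *x);
              void *t = tag_ptr(x, 0xAB);
              assert(ptr_tag(t) == 0xAB);  /* "validate" the pointer via its tag */
              free(strip_tag(t));
              return 0;
          }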



          • #65
            Originally posted by Phoronos View Post

            I talked about x86_128 as a joke.
            But never say never; it may yet come.
            Sometimes I fail to spot the irony, it seems.
            Well, 128-bit RISC-V exists, so it might get built for I dunno what reason.

