CISC vs RISC

  • CISC vs RISC

    True CISC processors like the 486 had a frontend that worked like a script interpreter, while true RISC processors like the Pentium Pro had a frontend that worked like a runtime compiler...

    On the 486 there was a ROM that stored all the microcode. Each microcode was essentially a tiny app for programming hardware registers, and each x86 instruction was essentially a script of microcodes that had to be capable of completing in 1 cycle. It didn't matter how long instructions were, as long as they could complete in one cycle. It was 1 cycle because each instruction had nonlinear interdependencies between its microcode components, and that made it impossible to pipeline the architecture.
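
    The "script of microcodes" idea can be sketched as a toy interpreter. Every instruction name and micro-step below is invented for illustration; actual 486 microcode is not public.

```python
# Toy model of a microcoded frontend: each architectural instruction
# is a "script" of micro-steps stored in a ROM and executed in order.
# All instruction names and micro-steps here are invented; real 486
# microcode is not public.

MICROCODE_ROM = {
    # instruction -> ordered list of tiny register-level actions
    "ADD r1, r2": ["read r1", "read r2", "alu add", "write r1"],
    "INC r1":     ["read r1", "alu add1", "write r1"],
}

def execute(instruction):
    """Step through an instruction's microcode script in sequence."""
    for step in MICROCODE_ROM[instruction]:
        print(step)  # a real core would drive control signals here

execute("ADD r1, r2")
```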

    On the Pentium Pro there was a decoder that translated the old instructions directly into micro-ops. Almost all of them could be translated into a direct 1-to-1 equivalent. The R in RISC means reduced, but reduced does not mean smaller or simpler. Reduced means that all the microcode that used to make up an instruction got compiled into a single, self-standing, independent micro-op. And because of the nature of what micro-ops are, they are much bigger than microcodes used to be. They do not have the interdependency problems that microcode had, so the architecture could be pipelined, out of order, and superscalar.
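
    A minimal sketch of that decode step: register-to-register instructions map to a single micro-op, while a memory-operand form splits into a load plus an ALU micro-op. The micro-op names are hypothetical; the real internal encodings are not documented.

```python
# Toy decoder: x86 instructions -> micro-ops. Register-only forms map
# 1-to-1; memory-operand forms split into a load plus an ALU micro-op.
# Micro-op names are invented; real internal encodings are not public.

DECODE_TABLE = {
    "add eax, ebx":   ["uop_add(eax, ebx)"],   # direct 1-to-1
    "mov eax, ebx":   ["uop_mov(eax, ebx)"],   # direct 1-to-1
    "add eax, [mem]": ["uop_load(tmp, mem)",   # split into
                       "uop_add(eax, tmp)"],   # load + add
}

def decode(instruction):
    """Look up the micro-op sequence for one x86 instruction."""
    return DECODE_TABLE[instruction]

print(decode("add eax, ebx"))    # one micro-op
print(decode("add eax, [mem]"))  # two micro-ops
```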

    EDIT: This is why I keep stressing the fact that x86 instructions get decoded into direct 1-to-1 micro-ops... It's the ACTUAL definition of RISC...

    EDIT: You can't call the frontend CISC and the backend RISC, that's not how it works. The frontend and the backend work together to make a complete architecture. The frontend decodes instructions into micro-ops and the backend executes them. The -entire- architecture is RISC.
    Last edited by duby229; 18 July 2024, 11:20 AM.

  • #2
    I agree that saying a CPU has a "CISC frontend" and a "RISC backend" doesn't make much sense (because microarchitectures of performance designs haven't neatly corresponded to the ISA for a looong time now; and because a CPU can only be said to be "RISC" if it exposes a RISC ISA to program with), but I'm not sure what you're trying to say here? What would you like us to discuss?



    • #3
      Interesting discussion! It's true that modern CPU architectures have evolved beyond the simple CISC vs RISC debate. While the x86 architecture traditionally employs a complex instruction set (CISC), modern implementations often translate these instructions into simpler micro-operations that resemble RISC principles. This allows for more efficient execution and better performance through techniques like pipelining and out-of-order execution.

      One thing worth noting is that this hybrid approach leverages the benefits of both architectures. The CISC front end allows for a richer set of instructions, which can be more efficient for certain tasks, while the RISC-like back end enables more streamlined processing. It's fascinating to see how these technologies blend to create the powerful processors we use today. What are your thoughts on how this hybrid architecture impacts modern computing performance?



      • #4
        I just wish consumer processors would adopt Big Endian. Little Endian exists purely for the sake of making CPU design easier, but we're far beyond that stage of the industry. Now Little Endian haunts us purely as technical debt while we could fully benefit from Big Endian with our modern design systems. It doesn't matter so much now since nobody really even takes endianness into account anymore, but it does pop up here and there, especially when doing atomics and SIMD operations.
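
        The byte-order difference in question is easy to see with Python's struct module, which can pack the same integer both ways:

```python
import struct

value = 0x01020304

little = struct.pack("<I", value)  # least-significant byte first
big    = struct.pack(">I", value)  # most-significant byte first

print(little.hex())  # 04030201
print(big.hex())     # 01020304

# Same value, opposite byte order -- this is what bites code that
# reinterprets memory at a different width, e.g. SIMD lane loads.
assert little == big[::-1]
```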



        • #5
          Originally posted by crystalchuck View Post
          I agree that saying a CPU has a "CISC frontend" and a "RISC backend" doesn't make much sense (because microarchitectures of performance designs haven't neatly corresponded to the ISA for a looong time now; and because a CPU can only be said to be "RISC" if it exposes a RISC ISA to program with), but I'm not sure what you're trying to say here? What would you like us to discuss?
          Architecture is the ISA and microarchitecture is the implementation. I can agree with calling the x86 ISA a CISC architecture. But a modern x86 processor like an Epyc or a Xeon is a RISC microarchitecture.

          I tend to use architecture and microarchitecture interchangeably, but my usage in the post above is incorrect. It is true that the instruction set is CISC (architecture). But it is also true that the modern implementations of it are RISC (microarchitecture).



          • #6
            Originally posted by duby229 View Post

            Architecture is the ISA and microarchitecture is the implementation. I can agree with calling the x86 ISA a CISC architecture. But a modern x86 processor like an Epyc or a Xeon is a RISC microarchitecture.

            I tend to use architecture and microarchitecture interchangeably, but my usage in the post above is incorrect. It is true that the instruction set is CISC (architecture). But it is also true that the modern implementations of it are RISC (microarchitecture).
            The "IS" part in both acronyms refers to "instruction set", not "microarchitecture". No advanced processor design of the last couple decades has "hardwired" instructions as far as I know. The concept of micro-operations/abstracting instructions has been around a very long time, and as such neither a CISC nor RISC ISA necessarily imply a specific microarchitecture, which is why they are not suitable terms to describe them. I also don't think it makes sense to lump in the concept of ISA with internal workings of the CPU which are not exposed to developers or end users. Finally, micro-operations are so opaque that we don't even know how many there are, how they are assembled and reordered, what they do exactly and so on, so I'm not sure how you could even know enough about them to qualify them as RISCy.
            Last edited by crystalchuck; 31 July 2024, 11:35 AM.



            • #7
              Originally posted by crystalchuck View Post

              The "IS" part in both acronyms refers to "instruction set", not "microarchitecture". No advanced processor design of the last couple decades has "hardwired" instructions as far as I know. The concept of micro-operations/abstracting instructions has been around a very long time, and as such neither a CISC nor RISC ISA necessarily imply a specific microarchitecture, which is why they are not suitable terms to describe them. I also don't think it makes sense to lump in the concept of ISA with internal workings of the CPU which are not exposed to developers or end users. Finally, micro-operations are so opaque that we don't even know how many there are, how they are assembled and reordered, what they do exactly and so on, so I'm not sure how you could even know enough about them to qualify them as RISCy.
              You're making a whole lot of sense. I don't think I can say that any better.

              EDIT: Although, there are just a handful of people in the world that know how to write microbenchmarks such that they can map out block diagrams of microarchitecture... There are a few people who know what these things look like on the inside...
              Last edited by duby229; 31 July 2024, 11:58 AM.



              • #8
                Originally posted by foofaraw View Post
                Interesting discussion! It's true that modern CPU architectures have evolved beyond the simple CISC vs RISC debate. While the x86 architecture traditionally employs a complex instruction set (CISC), modern implementations often translate these instructions into simpler micro-operations that resemble RISC principles. This allows for more efficient execution and better performance through techniques like pipelining and out-of-order execution.

                One thing worth noting is that this hybrid approach leverages the benefits of both architectures. The CISC front end allows for a richer set of instructions, which can be more efficient for certain tasks, while the RISC-like back end enables more streamlined processing. It's fascinating to see how these technologies blend to create the powerful processors we use today. What are your thoughts on how this hybrid architecture impacts modern computing performance?
                I generally agree with you, except I just don't like the CISC frontend RISC backend terminology.

                RISC isn't about small or simple instructions. RISC instructions can be almost as complex as CISC instructions. A CISC instruction can be any complexity as long as it can complete in one cycle. A RISC instruction can be any complexity as long as it can be decoded into a single micro-op.

                The old school x86 CISC architectures like the 486 didn't have true frontends. The concepts introduced with RISC, i.e. reducing instructions to a single micro-op, are what introduced the concept of a frontend and a backend. The idea itself of a frontend and a backend only exists on pipelined RISC microarchitectures. There are 4 basic stages in a pipeline; the first 2 are considered to be the frontend and the last 2 are considered to be the backend. But it is all RISC from the first stage to the last. I have no idea how many stages modern x86 CPUs have, but I imagine they must be like 30 or 40 stages long by now.
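
                The four basic stages are commonly fetch, decode, execute, and write-back. A textbook sketch (not any specific CPU) of instructions marching through them:

```python
# Textbook 4-stage in-order pipeline: fetch and decode form the
# frontend, execute and write-back form the backend. This models
# no specific CPU.

STAGES = ["fetch", "decode", "execute", "writeback"]
program = ["i1", "i2", "i3"]

# n instructions need n + (stages - 1) cycles to drain the pipeline
total_cycles = len(program) + len(STAGES) - 1

pipeline = [None] * len(STAGES)  # pipeline[s] = instruction in stage s
for cycle in range(1, total_cycles + 1):
    incoming = program[cycle - 1] if cycle <= len(program) else None
    pipeline = [incoming] + pipeline[:-1]  # everything advances one stage
    frontend, backend = pipeline[:2], pipeline[2:]
    print(f"cycle {cycle}: frontend={frontend} backend={backend}")

print(f"{len(program)} instructions retired in {total_cycles} cycles")
```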
                Last edited by duby229; 31 July 2024, 02:30 PM.



                • #9
                  Originally posted by duby229 View Post
                  Architecture is the ISA and microarchitecture is the implementation. I can agree with calling the x86 ISA a CISC architecture. But a modern x86 processor like an Epyc or a Xeon is a RISC microarchitecture.

                  I tend to use architecture and microarchitecture interchangeably, but my usage in the post above is incorrect. It is true that the instruction set is CISC (architecture). But it is also true that the modern implementations of it are RISC (microarchitecture).
                  Wrong. There's nothing reduced about the micro-ops. In fact, some micro-ops are even fused (compare-and-jump), where you have a single specialized instruction in the micro-ops instead of two in CISC. How is that RISC again?

                  The only thing it has remotely close to RISC is that all the micro-ops are not variable-length but fixed-length in encoding. That's not "reduced", it just happened to be that way on RISC archs.
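
                  The compare-and-branch fusion being referenced (usually called macro-op fusion) can be sketched as a peephole pass over the decoded stream; the op text here is invented for illustration:

```python
# Toy macro-op fusion: a peephole pass that merges an adjacent
# cmp + conditional-jump pair into one fused op, roughly as modern
# x86 decoders do. Op text is invented for illustration.

def fuse(ops):
    """Return the op stream with cmp/jcc pairs fused into one op."""
    fused, i = [], 0
    while i < len(ops):
        if (i + 1 < len(ops)
                and ops[i].startswith("cmp")
                and ops[i + 1].startswith("j")):
            fused.append(f"fused({ops[i]} + {ops[i + 1]})")
            i += 2  # consume both instructions
        else:
            fused.append(ops[i])
            i += 1
    return fused

stream = ["mov eax, 1", "cmp eax, ebx", "jne loop", "add eax, 2"]
print(fuse(stream))  # the cmp/jne pair collapses into a single op
```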



                  • #10
                    Originally posted by Weasel View Post
                    Wrong. There's nothing reduced about the micro-ops. In fact, some micro-ops are even fused (compare-and-jump), where you have a single specialized instruction in the micro-ops instead of two in CISC. How is that RISC again?

                    The only thing it has remotely close to RISC is that all the micro-ops are not variable-length but fixed-length in encoding. That's not "reduced", it just happened to be that way on RISC archs.
                    See, this is where you are misunderstanding micro-op fusion. Modern microarchitectures are both wide and long, and as such they can do a lot more than old x86 architectures could possibly have conceived of. There are scenarios where a modern microarchitecture can do IO and math in parallel. Micro-op fusion is even further reduction, from 1 instruction per micro-op to multiple instructions per micro-op in certain situations....

                    Which is all handled internally by the frontend. As a programmer you just have to be aware of when the compiler can make it happen in userspace, and then program for that.
                    Last edited by duby229; 02 August 2024, 11:26 AM.

