Linux 2.6.36-rc5 Kernel Released; Fixes 14 Year Old Bug

  • #21
    Well, the entire point of RISC is speed. Getting an older/slower RISC CPU, like ARM, would defeat the purpose, as I already have a Phenom 9950 X4 CPU.

    It's probably just a nerd/geek thing to like it, but...



    • #22
      Couple of reasons, I guess. The first is that the RISC processors behind the instruction decoder don't handle things like flow control (branches, jumps, loops, etc.), since that is all implemented in the instruction decoders. The second is that each new generation of CPU would require different RISC code (since the number and type of execution units varies from one generation to the next), and without the instruction decoder presenting a consistent programming model all of your code would need to be recompiled each time you upgraded your CPU.

      GPUs share the second problem, but since the graphics world essentially uses an interpreter for state and drawing operations, plus a JIT compiler for shader programs, that hides the (constantly changing) instruction sets behind a driver layer.
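The point about a decoder presenting a consistent programming model can be sketched with a toy model (all names here are invented for illustration): the same front-end program is translated into different micro-ops by a per-generation decoder, so nothing needs recompiling when the back end changes.

```python
# Toy model (all names invented): a stable front-end ISA is translated
# into generation-specific "micro-ops" by a per-generation decoder, so
# the same program runs unchanged even when the back end changes.

# Two hypothetical back ends with different internal operation sets.
GEN1_DECODER = {
    "ADD": ["uop_alu_add"],
    "LOAD": ["uop_agu", "uop_mem_read"],
}
GEN2_DECODER = {
    "ADD": ["uop_fused_alu"],            # newer core fuses the add
    "LOAD": ["uop_mem_read_combined"],   # and combines address gen + read
}

def decode(program, decoder):
    """Translate front-end instructions into back-end micro-ops."""
    return [uop for insn in program for uop in decoder[insn]]

program = ["LOAD", "ADD"]               # compiled once, against the stable ISA
print(decode(program, GEN1_DECODER))    # ['uop_agu', 'uop_mem_read', 'uop_alu_add']
print(decode(program, GEN2_DECODER))    # ['uop_mem_read_combined', 'uop_fused_alu']
```

The same idea applies whether the translator is a hardware decoder (x86) or a software JIT (shader compilers in GPU drivers).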


      • #23
        Devius's point about there not being a data path to let instructions flow directly to the processing units is also valid.


        • #24
          This sucks =(



          • #25
            Originally posted by V!NCENT View Post
            Well, the entire point of RISC is speed. Getting an older/slower RISC CPU, like ARM, would defeat the purpose, as I already have a Phenom 9950 X4 CPU.
            Fine... you can get an IBM BladeCenter QS22, which has two PowerXCell 8i CPUs at 3.2GHz and can run Linux. You can also upgrade the CPU you have now and be happy knowing there are tiny RISC units in there. Oh, and ARM is quite powerful considering the amount of power it uses.



            • #26
              Unless Intel/AMD standardized on a RISC format, it would mean you would need separate compilers and executables to run a program on every different model of CPU, which would suck. And if they did agree on some kind of standard, it would probably need to go through a decoding process anyway, at which point you aren't gaining much from the current CISC decoder.
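One way real programs already cope with CPU-to-CPU differences without shipping a separate executable per model is runtime dispatch: one binary carries several code paths and picks the best supported one at startup. A minimal sketch, with invented feature names and a stand-in for actual CPU-feature detection:

```python
# Toy runtime dispatch (all names invented): one "executable" carries
# several implementations and selects one based on detected CPU features,
# instead of shipping a separate build per CPU model.

def add_vectors_scalar(a, b):
    """Baseline path that any CPU can run."""
    return [x + y for x, y in zip(a, b)]

def add_vectors_simd(a, b):
    """Stand-in for an SSE/AVX path; same result, 'wider' implementation."""
    return [x + y for x, y in zip(a, b)]

def select_impl(cpu_features):
    """Pick the best implementation the detected CPU supports."""
    return add_vectors_simd if "simd" in cpu_features else add_vectors_scalar

add_vectors = select_impl({"simd"})       # chosen once at startup
print(add_vectors([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
```

Exposing raw internal RISC formats would force this kind of per-model juggling onto every application, which is exactly what the stable CISC front end avoids.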



              • #27
                Originally posted by devius View Post
                Fine... you can get an IBM BladeCenter QS22, which has two PowerXCell 8i CPUs at 3.2GHz and can run Linux. You can also upgrade the CPU you have now and be happy knowing there are tiny RISC units in there. Oh, and ARM is quite powerful considering the amount of power it uses.
                So it all boils down to this:
                1. Am I happy to have a RISC CPU that works like CISC?
                2. Do I like raytracing more than having a car?
                3. Am I going to build a Beowulf cluster of cheap ARM CPUs?



                • #28
                  Originally posted by Qaridarium
                  That's the here and now... but in my point of view a VLIW chip can hurt a RISC one...

                  so a VLIW chip like the HD5870 or the GTX480 is much stronger than any CPU based on RISC with a CISC emulator...
                  Intel certainly thought so when they created Itanium, but it never took off.



                  • #29
                    A couple of minor clarifications :

                    1. Most modern x86 CPUs have multiple "processors" behind the instruction decoder capable of executing multiple instructions in a single clock. Essentially this is VLIW behind an x86 decoder.

                    2. Most modern GPUs are SIMD (same instruction is executed on multiple data elements in parallel) but AFAIK only the ATI/AMD GPUs are VLIW *and* SIMD.

                    3. The equivalent of SIMD in an x86 processor is the SIMD instructions (3DNow!, SSE, etc.), while the equivalent of VLIW in an x86 processor is the ability to execute multiple instructions per clock.

                    4. The problem with early VLIW processors was that they exposed the VLIW instruction set to application programs. Most current applications of VLIW hide the VLIW instruction set behind either a high level API and driver stack (GPUs) or behind a stable instruction set (with either a hardware instruction decoder or a software translator).
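The SIMD/VLIW distinction in points 2-4 can be sketched with a toy model (all names invented): a SIMD instruction applies one operation across several data lanes, while a VLIW instruction word bundles several independent operations that issue in the same cycle.

```python
# Toy model (all names invented) of the SIMD vs. VLIW distinction:
# SIMD = one operation applied to many data lanes at once;
# VLIW = one instruction word bundling several independent operations.

def simd_add(lanes_a, lanes_b):
    """One 'instruction' operating on all lanes in parallel."""
    return [a + b for a, b in zip(lanes_a, lanes_b)]

def execute_vliw_word(word, regs):
    """One VLIW word: each slot targets a different functional unit.
    Slots must be independent, since they notionally issue together."""
    units = {
        "alu": lambda d, s1, s2: regs.__setitem__(d, regs[s1] + regs[s2]),
        "mul": lambda d, s1, s2: regs.__setitem__(d, regs[s1] * regs[s2]),
    }
    for unit, dst, s1, s2 in word:
        units[unit](dst, s1, s2)

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]

regs = {"r0": 2, "r1": 3, "r2": 0, "r3": 0}
execute_vliw_word([("alu", "r2", "r0", "r1"), ("mul", "r3", "r0", "r1")], regs)
print(regs["r2"], regs["r3"])  # 5 6
```

Note that in the VLIW case the compiler (or driver-level translator, per point 4) is responsible for packing only independent operations into one word; the hardware does no reordering.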


                    • #30
                      Yeah, I guess VLIW would be more accurate than RISC here, in the sense that VLIW processors pretty much have to be RISC as well.