ARM Launches "Facts" Campaign Against RISC-V


• #81
Originally posted by oiaohm View Post
https://riscv.org/membership/3278/bitmain/
Bitmain ASICs are in fact RISC-V with custom accelerators. This kind of explains why Nvidia is looking at having a RISC-V core inside their GPUs for processing. So Bitmain is in fact a RISC-V equipment developer.

I'm not going to argue against bullshit, so... prove it.

Your link says nothing about it, by the way, other than Bitmain being a member (they could just be using RISC-V microcontrollers, which don't matter).

Did you know Apple is a member of the Khronos Group (who design Vulkan)? Proof that Apple uses Vulkan!!! Oh wait.



• #82
Originally posted by blargh4 View Post
The key idea of RISC is that the instruction set is designed to pipeline effectively for increased throughput, not that it can't encode many complex operations. Compared to typical CISC machines of the day, the RISC designs were a performance win.

I already linked you to where you're dead wrong. CISC designs were splitting instructions into uops way before "RISC" (quotes for emphasis) was mainstream.

Did you guys know RISC originally meant that all instructions on register operands (not loads/stores) execute in one clock cycle? Yeah... But keep redefining it; maybe someday you'll end up with a definition that says the "entirety" of x86 is RISC. Or maybe ASICs are RISC too.



• #83
Originally posted by coder View Post
Wow, you're already writing off the future of one of the most successful and ascendant semiconductor companies of the past decade?

ARM has been very aggressive in rolling out new cores and addressing new markets. I don't see them sitting idly by and completely missing some huge macro-trend that would stop them in their tracks. They've got a whole range of cores from microcontrollers up to laptop and potentially HPC, plus GPUs, machine learning blocks, and computer vision blocks.

They won't be for long. That they had to run that campaign proves it.



• #84
Originally posted by Weasel View Post
You're missing the point. The difference between CISC and RISC is that CISC tries to implement more functionality in hardware instead of letting software do it with multiple instructions. ASICs are way more "hardcore" than CISC in terms of application-specific hardware, but that doesn't change the fact that they use even more specialized hardware/instructions than CISC. Use a little brain, you'll figure it out.

Someone earlier mentioned a CISC CPU that "calculated polynomials" directly in hardware. Well, guess what? That's what ASICs do. A RISC would "calculate" it in software using instruction parallelism but simple operations/instructions, nothing specific to polynomial calculation (just multiplies and reciprocals and such). And that can't compete with dedicated hardware or instructions.

So in terms of performance: ASIC > CISC > RISC.

But an ASIC is not programmable, so it's still inferior to CISC in many ways.

For example, a CISC CPU like x86 has a "fast reciprocal square root" instruction, which is much faster than doing it in software but gives an approximate result (to do it in software you'd have to use an iterative method). It's only 5 cycles of latency, when a floating-point multiply is 3 cycles! Of course I wouldn't be surprised if ARM implemented something like this as well, since it's a hybrid and not really RISC (it had to compete with x86). Such an instruction is "useless" in that it can be implemented with other, "simpler" instructions in RISC, but that would be much slower.

That's why true RISC sucks.

Hmmm, I wonder which is the likelier possibility: that a whole generation of CPU-architecture researchers thought they could design a CPU with strong floating-point performance by stringing together sequences of integer ops, because hardware acceleration is somehow anathema to the RISC idea, or that you're attacking a strawman?

Again, what RISC mainly reduces, conceptually, is not functionality but the coupling of memory access (and the attendant addressing modes) to execution.
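
To make that concrete, here's a minimal C sketch (mine, for illustration); the assembly in the comments shows roughly how each encoding style would say it, and is not actual compiler output.

Code:
/* The same C statement, annotated with how a CISC encoding couples the
 * memory access to the ALU op while a load/store RISC decouples them. */
#include <stdio.h>

static int accumulate(const int *p, int acc) {
    /* x86 (CISC) can fold the load into the add:
     *     add  eax, [rdi]        ; one instruction: memory access + add
     * RISC-V (load/store) must split it in two:
     *     lw   t0, 0(a0)         ; explicit load
     *     addw a1, a1, t0        ; register-to-register add
     */
    acc += *p;
    return acc;
}

int main(void) {
    int x = 41;
    printf("%d\n", accumulate(&x, 1));  /* prints 42 */
    return 0;
}

The decoupled form is what makes the pipelining argument work: each instruction touches either memory or registers, never both.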



• #85
Originally posted by Weasel View Post
You're missing the point. The difference between CISC and RISC is that CISC tries to implement more functionality in hardware instead of letting software do it with multiple instructions. ASICs are way more "hardcore" than CISC in terms of application-specific hardware, but that doesn't change the fact that they use even more specialized hardware/instructions than CISC. Use a little brain, you'll figure it out.

Someone earlier mentioned a CISC CPU that "calculated polynomials" directly in hardware. Well, guess what? That's what ASICs do. A RISC would "calculate" it in software using instruction parallelism but simple operations/instructions, nothing specific to polynomial calculation (just multiplies and reciprocals and such). And that can't compete with dedicated hardware or instructions.

So in terms of performance: ASIC > CISC > RISC.

But an ASIC is not programmable, so it's still inferior to CISC in many ways.

ASICs are programmable, and have been for quite some time.

Originally posted by Weasel View Post
For example, a CISC CPU like x86 has a "fast reciprocal square root" instruction, which is much faster than doing it in software but gives an approximate result (to do it in software you'd have to use an iterative method). It's only 5 cycles of latency, when a floating-point multiply is 3 cycles! Of course I wouldn't be surprised if ARM implemented something like this as well, since it's a hybrid and not really RISC (it had to compete with x86). Such an instruction is "useless" in that it can be implemented with other, "simpler" instructions in RISC, but that would be much slower.

That's why true RISC sucks.

Wtf are you on about? That square-root instruction you're talking about is executed by software. It's a program that executes it. Software. There is no "faster than doing it in software", because it's ALWAYS done in software.



• #86
Originally posted by tjukken View Post
ASICs are programmable, and have been for quite some time.

Seriously? Even the name says it: "application-specific integrated circuit".

You can easily tell that ASICs are not programmable by looking at crypto coins like Monero, which hard-fork often just to break the ASICs designed for them and let people mine on their GPU rigs; otherwise they'd be completely destroyed by ASICs (as happened with Bitcoin).

Originally posted by tjukken View Post
Wtf are you on about? That square-root instruction you're talking about is executed by software. It's a program that executes it. Software. There is no "faster than doing it in software", because it's ALWAYS done in software.

That first question right back at you.

Software does not execute anything. Software is data: code encoded as data. The CPU executes that data as instructions. That square-root instruction tells the CPU to use its specially crafted hardware and spit out a result. The CPU needs either dedicated hardware wiring for it OR to emulate it with other micro-ops (but that's slow). In this case it's not emulated with other micro-ops; it has a dedicated micro-op linked to the dedicated hardware.

In true RISC you don't have such instructions at all; you have to use multiply/add/reciprocal and other basic instructions to achieve the same thing, but much slower. You have to do Newton-Raphson with those adds and multiplies, and it's much slower. In fact, just 2 multiplies (which is just one round of NR) is already more latency than the hardware reciprocal instruction. And the hardware instruction has the accuracy of 2 NR steps...
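
To illustrate the two routes (a minimal sketch of my own, assuming an SSE-capable x86 compiler): the dedicated instruction via the rsqrtss intrinsic, versus spelling it out in software with the well-known integer seed plus Newton-Raphson steps built from nothing but multiplies and subtracts.

Code:
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics: _mm_rsqrt_ss */

/* Hardware route: one dedicated instruction (rsqrtss), a ~12-bit estimate. */
static float rsqrt_hw(float x) {
    return _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
}

/* Software route: integer bit-trick seed, then Newton-Raphson refinement
 * y' = y * (1.5 - 0.5*x*y*y), using only multiplies and subtracts. */
static float rsqrt_sw(float x) {
    union { float f; unsigned int i; } u;
    u.f = x;
    u.i = 0x5f3759df - (u.i >> 1);        /* rough initial guess */
    float y = u.f;
    y = y * (1.5f - 0.5f * x * y * y);    /* NR step 1 */
    y = y * (1.5f - 0.5f * x * y * y);    /* NR step 2 */
    return y;
}

int main(void) {
    float x = 2.0f;
    /* 1/sqrt(2) is about 0.707107 */
    printf("hw: %.6f  sw: %.6f\n", rsqrt_hw(x), rsqrt_sw(x));
    return 0;
}

Note that each NR step is a chain of dependent multiplies, which is where the latency argument above comes from.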
Last edited by Weasel; 11 July 2018, 01:43 PM.



• #87
Originally posted by Weasel View Post
Use a little brain, you'll figure it out.

You're funny. I've been involved in CPU design for 20 years. I know how to use my brain and how CPUs and ASICs are designed, knowledge you obviously lack.
Last edited by ldesnogu; 11 July 2018, 02:25 PM. Reason: Typo



• #88
Originally posted by Weasel View Post
Seriously? Even the name says it: "application-specific integrated circuit".

You can easily tell that ASICs are not programmable by looking at crypto coins like Monero, which hard-fork often just to break the ASICs designed for them and let people mine on their GPU rigs; otherwise they'd be completely destroyed by ASICs (as happened with Bitcoin).

Sigh. Just google it.

Originally posted by Weasel View Post
That first question right back at you.

Software does not execute anything. Software is data: code encoded as data. The CPU executes that data as instructions. That square-root instruction tells the CPU to use its specially crafted hardware and spit out a result. The CPU needs either dedicated hardware wiring for it OR to emulate it with other micro-ops (but that's slow). In this case it's not emulated with other micro-ops; it has a dedicated micro-op linked to the dedicated hardware.

In true RISC you don't have such instructions at all; you have to use multiply/add/reciprocal and other basic instructions to achieve the same thing, but much slower. You have to do Newton-Raphson with those adds and multiplies, and it's much slower. In fact, just 2 multiplies (which is just one round of NR) is already more latency than the hardware reciprocal instruction. And the hardware instruction has the accuracy of 2 NR steps...

Software tells the CPU what to do. RISC, CISC, it's all the same in that regard.



• #89
Originally posted by ldesnogu View Post
You're funny. I've been involved in CPU design for 20 years. I know how to use my brain and how CPUs and ASICs are designed, knowledge you obviously lack.

https://www.logicallyfallacious.com/...alse-Authority

I can claim anything I want as well.



• #90
Originally posted by tjukken View Post
Sigh. Just google it.

Google what? I got the Wikipedia page for ASIC, and its first statement pretty much nullifies this. Maybe you're confusing them with FPGAs. By programmable I mean the code (instructions) itself, not just tweaking a few settings.

Originally posted by tjukken View Post
Software tells the CPU what to do. RISC, CISC, it's all the same in that regard.

No, they're not the same. Software tells the CPU what to do in the CPU's "language", and the language is RISC or CISC. RISC means the language is simple, with few words of equal length (clock latency, not encoding) that can be combined into whatever you want to do. CISC means the language is complex, with a lot of words, some more specialized than others, and each word takes a different time to process (again, clock latency).

One such CISC word can mean an entire phrase, such as the reciprocal square-root approximation I was talking about.

The difference is that the CPU understands the language's words and has dedicated parts of its brain for dealing with them (dedicated hardware for the instructions).
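
As a side-by-side sketch of that "phrase vs. words" point (mine, for illustration): the polynomial example from earlier in the thread, spelled out with nothing but multiplies and adds via Horner's rule, which is how a RISC target has to say it. A CISC like the VAX could say the whole phrase with its single POLY instruction.

Code:
#include <stdio.h>

/* Horner's rule: one multiply and one add per coefficient, nothing
 * polynomial-specific in the instruction set. */
static double poly_eval(const double *c, int n, double x) {
    double acc = c[0];                 /* highest-order coefficient first */
    for (int i = 1; i <= n; i++)
        acc = acc * x + c[i];
    return acc;
}

int main(void) {
    double c[] = { 2.0, -3.0, 1.0 };          /* 2x^2 - 3x + 1 */
    printf("%f\n", poly_eval(c, 2, 4.0));     /* 32 - 12 + 1 = 21 */
    return 0;
}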
Last edited by Weasel; 11 July 2018, 03:06 PM.

