ARM Launches "Facts" Campaign Against RISC-V

  • #31
    Originally posted by 89c51 View Post
    Is it just me, or does ARM seem very worried about something that is practically nonexistent?
    Companies look at trends and future forecasts.

    They need to react fast enough to catch the small rock falling before it becomes a full landslide. Many processes are slow but have a very small window of opportunity in which to influence them before they become unmanageable.

    RISC-V got a ton of support from major companies, and there are very good reasons for them to do so, that's your trend right there.

    Comment


    • #32
      Originally posted by Weasel View Post

      RISC means Reduced Instruction set, not "fixed width".
      Actually, it is one and the same: RISC implies a fixed instruction word length, and the minute a processor has to load multiple words of memory to complete an instruction, it is no longer running as a RISC machine. That said, RISC instructions can be as advanced as the designer wants to make them; FMA, for example, shows up in many RISC implementations.

      As for Intel, yes, the processors are internally RISC-like; this has been known for some time. The old x86 instructions are literally decoded into RISC-like operations fed to the execution units. Maybe not completely RISC-like, but then again, not many RISC processors are rigidly RISC either.

      As for ARM, there is nothing stopping them from going high performance. The problem is that most of their customers don't want no-holds-barred performance; they want maximum performance per watt. Frankly, Apple has shown that ARM performance can be advanced significantly while still executing the ARM instruction set.
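      The "fixed word length" point argued above has a concrete consequence for decoder design: with a fixed width, instruction boundaries are known up front and can be found in parallel, whereas a variable-length encoding forces the decoder to walk instructions one after another. A minimal toy sketch (not modelling any real ISA; the length rule below is invented for illustration):

```python
# Toy sketch: fixed-width vs variable-length instruction decode.
# The encoding rule used here is hypothetical, not any real ISA's.

def boundaries_fixed(code: bytes, width: int = 4) -> list:
    """With a fixed width, every instruction boundary is known up front
    and could be decoded in parallel."""
    return list(range(0, len(code), width))

def boundaries_variable(code: bytes, length_of) -> list:
    """With variable lengths, each boundary depends on having decoded
    the previous instruction first, so the walk is sequential."""
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        pc += length_of(code[pc])
    return offsets

# Hypothetical rule: opcode byte 0x0F starts a 3-byte instruction,
# everything else is 1 byte.
length_of = lambda opcode: 3 if opcode == 0x0F else 1

print(boundaries_fixed(bytes(8)))                               # [0, 4]
print(boundaries_variable(b"\x01\x0f\x00\x00\x02", length_of))  # [0, 1, 4]
```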

      Comment


      • #33
        Originally posted by misp View Post
        You can look at it from another point: how good is the Linux desktop when Windows, with its declining quality, has better market share?
        He will say Windows is still better. That's what happens when Microsoft sponsors universities: people coming out of them tend to have their minds warped by the propaganda.

        Comment


        • #34
          Originally posted by Weasel View Post
          No RISC CPU will ever compete at performance, sorry to burst your guys' bubble.

          ARM is not even a RISC, it's a hybrid. The only RISC things it has is the large register set (which isn't even that good after 16) and the explicit load-store model. If you compare it to MIPS you'll see what a real RISC looks like, and why it's so bad at performance.
          That's funny, because ARM dropped everything complex (like load/store multiple) for AArch64 and adopted a read-only register 0 to simplify the instruction set. In other words, the modern 64-bit part looks a lot like MIPS now, while two other decoders (32-bit and Thumb mode) are tucked on to keep legacy code running.
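          The "read-only register 0" idea (MIPS $zero, AArch64 XZR/WZR) can be sketched as a register file where slot 0 is hard-wired: reads always return 0 and writes are discarded, which lets the ISA encode things like "discard this result" or "move zero" without extra instructions. A purely illustrative toy model:

```python
# Toy register file with a hard-wired zero register, in the spirit of
# MIPS $zero / AArch64 XZR. Illustrative only; not a real CPU model.

class RegFile:
    def __init__(self, n: int = 32):
        self._regs = [0] * n

    def read(self, i: int) -> int:
        # Register 0 always reads as zero, regardless of prior writes.
        return 0 if i == 0 else self._regs[i]

    def write(self, i: int, value: int) -> None:
        if i != 0:  # writes to register 0 are silently discarded
            self._regs[i] = value

rf = RegFile()
rf.write(1, 42)
rf.write(0, 99)                # ignored: register 0 is read-only
print(rf.read(1), rf.read(0))  # 42 0
```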

          MIPS had the server market cornered, and ARM's performance was a joke before the Cortex A8. MIPS's demise came from SGI's management effing it up, not from technical issues.

          Originally posted by Weasel View Post
          We're in an era where more and more has to be done in the hardware to get any meaningful performance. There's a reason ASICs are much more efficient than any general-purpose processor -- specialized hardware is simply superior (which is what complex instructions are all about). You can design a CPU with just one instruction (subtraction, and ability for it to work on the instruction pointer) and it can compute anything in existence, but it will be so slow rendering it unusable for anything practical.

          ARM is probably scared because they won't have anything to stand on except for "ubiquity in mobile space". Not as good as a CISC CPU (for performance) and not open like RISC-V, who would pick a middle ground that satisfies nobody? Not even in embedded applications.
          You have no clue, really. x86, for example, breaks its CISC instructions down into RISC-like µops and caches them to get any kind of performance and power efficiency. The only thing CISC is good for is saving memory, and even that's not the case for x86, since several now never-used instructions occupy the short opcodes.

          Nor does CISC mean the instructions are more complex or powerful - the S stands for SET. And funnily enough, there is a fast path in x86, which basically means that if you use anything else, your CPU hits the brakes, then hits a truck carrying horse manure, before eventually getting back to speed (syncing or even flushing the pipeline). That "something else" is your "complex" instructions that don't map nicely onto the internal RISC-like architecture.
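          The fast-path-vs-microcode tradeoff described above can be caricatured as a cost model: simple instructions decode directly, while rarely-used complex ones pay a large switch-over penalty. All instruction names and cycle counts below are invented for illustration; real decoders are far more nuanced:

```python
# Toy cost model for a decoder "fast path": simple ops decode in one
# cycle, anything else engages a (hypothetical) microcode sequencer
# with a flat penalty. Numbers and instruction names are made up.

FAST = {"add", "mov", "cmp", "jcc"}
MICROCODE_PENALTY = 20  # hypothetical cycles to switch to microcode

def cost(instrs: list) -> int:
    cycles = 0
    for op in instrs:
        if op in FAST:
            cycles += 1                        # stays on the fast path
        else:
            cycles += 1 + MICROCODE_PENALTY    # slow, microcoded path
    return cycles

print(cost(["add", "mov", "cmp", "jcc"]))  # 4
print(cost(["add", "enter", "mov"]))       # 23: 'enter' hits microcode
```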

          Comment


          • #35
            Originally posted by misp View Post

            You can look at it from another point: how good is the Linux desktop when Windows, with its declining quality, has better market share?
            Having lived with Windows 10 for five months until Linux was stable on my laptop, I can safely say both platforms have issues on the desktop these days. For example, scrolling had a tendency to slow down on Windows over time. Fedora 28 has its issues too, with unstable apps and poor hardware support.

            Right or wrong Windows remains on the desktop because people want it there.

            Comment


            • #36
              Originally posted by 89c51 View Post
              Is it just me, or does ARM seem very worried about something that is practically nonexistent?
              I guess some of their customers are already pondering replacing designs, and of course those talks and rumors will end up influencing investors.
              The arguments are mostly valid, though, and are aimed at management and investors.

              I am pretty sure their Cortex-M line will get some pressure from RISC-V soon, not so much Desktop/Phone CPUs.

              Comment


              • #37
                Originally posted by tjukken View Post
                you really should google stuff before spouting off bullshit:

                http://lmgtfy.com/?q=intel+AMD+risc+internally
                I only see links validating my statement, and morons who don't understand it repeating that claim all over the internet and getting destroyed when challenged.

                For example (and is what I got with your "lmgtfy" btw): https://news.ycombinator.com/item?id=12353976

                You guys don't know jack about micro-ops and just parrot nonsense spread by morons in the first place. Claims are NOT FACTS.

                The more modern a CPU is internally, the less it breaks instructions apart and the more micro-ops it has, which is the complete opposite of your precious RISC. So in actuality, it's the RISC-ISA CPUs that become CISC internally to keep up with performance (e.g., fusing multiple instructions into one complex op).
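                Instruction fusion of the kind mentioned above (e.g., merging a compare with its following conditional branch into a single internal op) can be sketched as a simple peephole pass over an instruction stream. Real decoders have much stricter pairing rules than this toy; the pattern here is only illustrative:

```python
# Toy macro-op fusion pass: adjacent cmp+jcc pairs are merged into one
# fused op before issue. Real fusion rules in actual decoders are far
# more restrictive; this sketch only shows the idea.

def fuse(instrs: list) -> list:
    fused, i = [], 0
    while i < len(instrs):
        if i + 1 < len(instrs) and instrs[i] == "cmp" and instrs[i + 1] == "jcc":
            fused.append("cmp+jcc")  # one macro-op instead of two
            i += 2
        else:
            fused.append(instrs[i])
            i += 1
    return fused

print(fuse(["add", "cmp", "jcc", "mov"]))  # ['add', 'cmp+jcc', 'mov']
```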

                The complete opposite is happening yet you parrot the same bullshit that started from the P4 era. Nice.

                See this (pdf) for actual micro-ops per instruction: http://www.agner.org/optimize/instruction_tables.pdf. It is safe to assume anything that decodes to 1 micro-op is a complex instruction and "native" to the CPU. See how many apart from basic arithmetic/logic instructions have 1 micro-op and then keep believing it's "RISC".
                Last edited by Weasel; 09 July 2018, 12:43 PM.

                Comment


                • #38
                  Originally posted by L_A_G View Post
                  As for Weasel's ARM crap-talk: pretty much everything that isn't academic, meant for very simple embedded use, or just plain old uses a so-called "post-RISC" architecture, and people stopped talking about "standard" RISC architectures over a decade ago. Even x86 went quite RISC-like, but has since been adding loads of CISC-esque hardware and features. There just isn't much point in talking about how "un-RISC-like" architectures are today, when being too RISC-like is a detriment to your design unless you want a really low-power system (you know, like being able to run off the heat radiated by a cup of coffee).
                  You know, the moment people "redefined" RISC is the moment they lost the RISC-superiority argument and CISC won. Nothing more needed.

                  Comment


                  • #39
                    Originally posted by wizard69 View Post
                    Actually, it is one and the same: RISC implies a fixed instruction word length, and the minute a processor has to load multiple words of memory to complete an instruction, it is no longer running as a RISC machine. That said, RISC instructions can be as advanced as the designer wants to make them; FMA, for example, shows up in many RISC implementations.
                    Which makes them not fully RISC but relying on "complex" instructions. See: https://en.wikipedia.org/wiki/Instru...x_instructions

                    Any SIMD (vector) instruction is complex by definition. The fact that RISC CPUs had to borrow them, and still wanted to be called RISC by "redefining" the term, proves they had already lost the battle to CISC.

                    Originally posted by wizard69 View Post
                    As for Intel, yes, the processors are internally RISC-like; this has been known for some time. The old x86 instructions are literally decoded into RISC-like operations fed to the execution units. Maybe not completely RISC-like, but then again, not many RISC processors are rigidly RISC either.
                    This has not been "known" for a long time; it has been parroted as bullshit for a long time. See my previous post.

                    SIMD instructions are CISC. SIMD instructions take 1 micro-op on x86's internal circuitry (i.e., they are implemented directly in hardware). Thus the internals are CISC. The end.

                    Originally posted by wizard69 View Post
                    As for ARM, there is nothing stopping them from going high performance. The problem is that most of their customers don't want no-holds-barred performance; they want maximum performance per watt. Frankly, Apple has shown that ARM performance can be advanced significantly while still executing the ARM instruction set.
                    This is the most fallacious argument in existence. Customers don't want high performance? Then they don't buy that product. But what about other potential customers who do want performance? The fact is that they can't do it, not that they don't want to, no matter how much you believe otherwise.

                    Comment


                    • #40
                      Originally posted by starshipeleven View Post
                      The fools also launched a campaign against Chromebooks; they basically gave Google's stuff free advertising and obviously failed to discourage Chromebook sales. It's fucking hilarious. What were they even thinking....
                      I guess when your greatest enemy does something like that you know you have something good.

                      Comment
