ARM Aims To Deliver Core i5-Like Performance At Less Than 5 Watts


  • #31
    Originally posted by gukin View Post
    When I saw "i5-like performance" I thought: "Wow, if they can match the i5-8250U that would be pretty impressive." However, they specified the i5-7300U, which is a two-core, non-SMT processor.
    No, it has 2 cores / 4 threads.


    • #32
      Originally posted by starshipeleven View Post
      That's cool but not game-changing imho.
      Since they currently sell (license) implementations of their GPU driver to vendors (for example, the OpenGL X11 driver is sold separately from the GLES Android driver), open-sourcing it would be game-changing enough, in my opinion.

      • #33
        Originally posted by johnc View Post
        Yeah, yeah... They've been saying this for years and have been getting nowhere close. Not to mention that nobody wants Windows ARM laptops and ARM can't see beyond Windows laptops.
        They have NOT been "saying this for years".
        What they have been saying for years is that they hope to be competitive in servers starting in 2020. Repeating that every quarter (it's a standard part of their quarterly investor spiel) just means they're confident in the date, not that they keep delaying...

        This is the first time they've indicated, in any sort of serious way, that they think it's worth competing in desktops.
        Hell, if you actually READ THE DAMN PRESS RELEASE, it's right there in the first line:
        "Arm unveils its first-ever public CPU forward-looking roadmap and performance numbers"

        So why are they doing this?
        As usual, Americans, especially Wintel users, think the world revolves around them and that this announcement is relevant to them. It has NOTHING TO DO with Wintel. If MS wants to keep pushing Windows on ARM, ARM won't stop them, but they don't care. This is about enabling computing for everyone who is NOT on Wintel; it's about enabling ruggedized cheap (really cheap) laptops for India, rural China, and Africa. The comparison to Intel performance is there to let the Chinese vendors (and anyone else planning non-ISA-dependent boxes for the 2019-2021 timeframe) calibrate their expectations and plan accordingly.
        Do you sell a NAS? A MikroTik-style box? An Asterisk box? Maybe it's time to either reconsider ARM64 (start experimenting with the software) or think about what you could do if you had much more CPU.
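
        For instance, here's a minimal, hypothetical starting point, assuming a Debian-style cross toolchain (the gcc-aarch64-linux-gnu and qemu-user packages), so you can build and run ARM64 binaries without owning the hardware yet:

        ```c
        /* hello_arm64.c -- trivial smoke test for an ARM64 cross toolchain.
         * Assumed setup (Debian/Ubuntu):
         *   sudo apt install gcc-aarch64-linux-gnu qemu-user
         * Build and run:
         *   aarch64-linux-gnu-gcc -static -O2 hello_arm64.c -o hello_arm64
         *   qemu-aarch64 ./hello_arm64
         */
        #include <stdio.h>

        int main(void) {
        #if defined(__aarch64__)
            puts("Running as an ARM64 (AArch64) binary.");
        #else
            puts("Not an ARM64 build -- check your compiler invocation.");
        #endif
            return 0;
        }
        ```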

        As for why they're doing this, that's as obvious as why they're announcing it. Apple has shown what is possible if you're willing to pay a slightly higher area and energy cost. ARM has always concentrated on absolutely minimal area and energy requirements, and that has served them well. But there is clearly a huge pool of potential customers (i.e. all those flagship phone vendors) who would be quite willing to pay a lot more for a core that's a lot closer to Apple's. So it's time to augment the business plan. And once you have a core that kicks ass, why limit yourself to selling it only in phones?

        • #34
          Originally posted by johnc View Post
          Yeah, yeah... They've been saying this for years and have been getting nowhere close. Not to mention that nobody wants Windows ARM laptops and ARM can't see beyond Windows laptops.
          Years? Your memory seems to be a bit inaccurate: they announced their first "i5 performance at lower wattage" part, the Cortex-A76, only at the end of May this year, meaning it's only been about 2.5 months since they started talking about reaching the laptop performance envelope.

          We haven't seen any devices actually using it, and the process it's supposed to be manufactured on, Samsung's 7nm process, has yet to reach volume production. As such, any sane person will at least wait until actual devices using these "laptop level" ARM parts start showing up (which realistically should be at some point next year).
          Last edited by L_A_G; 21 August 2018, 09:15 AM.

          • #35
            Originally posted by Wilfred View Post
            arm64 also has the speculative execution vulnerabilities, so ARM has to deal with those too.
            Hardly. The branch prediction in ARM is both newer and cleaner, so it's not incredibly difficult to fix. Moreover, ARM can just break ISA backwards compatibility, while Intel can't.

            • #36
              Originally posted by coder View Post
              Intel's TDP includes their GPU. AVX2 also consumes quite a bit of power, and I don't know how that factors into Intel's TDP estimates.
              Nobody forces you to use it. Transistors that are not used do not use power.
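
              To illustrate the opt-in nature, a minimal sketch (the function names here are made up, but __builtin_cpu_supports and the intrinsics are real GCC/Clang features): the 256-bit AVX2 path only executes if the code actually dispatches to it.

              ```c
              /* Runtime AVX2 dispatch: the wide vector path is only taken
               * on CPUs that report AVX2, and only if the caller uses it. */
              #include <immintrin.h>
              #include <stddef.h>

              static float sum_scalar(const float *v, size_t n) {
                  float s = 0.0f;
                  for (size_t i = 0; i < n; i++)
                      s += v[i];
                  return s;
              }

              __attribute__((target("avx2")))
              static float sum_avx2(const float *v, size_t n) {
                  __m256 acc = _mm256_setzero_ps();   /* 8 floats per register */
                  size_t i = 0;
                  for (; i + 8 <= n; i += 8)
                      acc = _mm256_add_ps(acc, _mm256_loadu_ps(v + i));
                  float t[8];
                  _mm256_storeu_ps(t, acc);
                  float s = t[0] + t[1] + t[2] + t[3]
                          + t[4] + t[5] + t[6] + t[7];
                  for (; i < n; i++)                  /* scalar tail */
                      s += v[i];
                  return s;
              }

              float sum(const float *v, size_t n) {
                  /* The AVX2 units are only exercised on this path. */
                  if (__builtin_cpu_supports("avx2"))
                      return sum_avx2(v, n);
                  return sum_scalar(v, n);
              }
              ```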

              • #37
                Originally posted by c117152 View Post
                Moreover, ARM can just break ISA backwards compatibility while Intel can't.
                Citation needed.

                Not gonna bother, but some of you guys really have no idea what you're talking about, and think repeating your opinions will somehow turn them into facts.

                https://www.logicalfallacies.org/arg...epetition.html

                (I also laughed hard at the branch prediction being "newer and cleaner". It's just too funny, because you're talking about something you clearly have zero clue of: the branch predictor is not even EXPOSED via the ISA on x86, so each microarchitecture (e.g. Skylake) can have a different branch predictor, totally new or not.)
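
                A minimal sketch of that point, for the skeptics: nothing in the x86 ISA names the predictor, yet its behavior is plainly observable through timing. The same binary runs on any x86-64 microarchitecture regardless of which predictor it implements. (Compile with -O1 so the compiler keeps the branch instead of emitting a branchless cmov.)

                ```c
                /* Classic sorted-vs-unsorted branch prediction demo. */
                #include <stdio.h>
                #include <stdlib.h>
                #include <time.h>

                #define N (1 << 16)

                static int cmp_int(const void *a, const void *b) {
                    return *(const int *)a - *(const int *)b;
                }

                /* Same instruction stream in both runs; only the
                 * pattern of branch outcomes differs. */
                static long run(const int *data, int n) {
                    long sum = 0;
                    for (int rep = 0; rep < 1000; rep++)
                        for (int i = 0; i < n; i++)
                            if (data[i] < 128)
                                sum += data[i];
                    return sum;
                }

                int main(void) {
                    static int data[N];
                    for (int i = 0; i < N; i++)
                        data[i] = rand() % 256;

                    clock_t t0 = clock();
                    long a = run(data, N);     /* random: ~50% mispredicts */
                    clock_t t1 = clock();

                    qsort(data, N, sizeof data[0], cmp_int);
                    clock_t t2 = clock();
                    long b = run(data, N);     /* sorted: near-perfect prediction */
                    clock_t t3 = clock();

                    printf("random: %.2fs  sorted: %.2fs  (sums: %ld %ld)\n",
                           (double)(t1 - t0) / CLOCKS_PER_SEC,
                           (double)(t3 - t2) / CLOCKS_PER_SEC, a, b);
                    return 0;
                }
                ```
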
                Last edited by Weasel; 17 August 2018, 08:00 AM.

                • #38
                  Originally posted by Weasel View Post
                  Nobody forces you to use it. Transistors that are not used do not use power.
                  That made my day.

                  • #39
                    Originally posted by coder View Post
                    Intel's TDP includes their GPU. AVX2 also consumes quite a bit of power...
                    Only if you use it, really. And if you do (or can) use AVX2, your watts-per-FLOP is going to be better than if you didn't, most of the time (and your throughput is going to be hard to beat, which is part of why I'm skeptical).
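
                    A back-of-the-envelope sketch of the W/FLOP point (all clock and power figures here are illustrative assumptions, not measurements):

                    ```c
                    /* Illustrative peak-FLOPS arithmetic; the clock and
                     * package-power numbers are assumptions picked for
                     * round figures. */
                    #include <stdio.h>

                    int main(void) {
                        double ghz = 3.0;                       /* assumed clock */
                        double scalar = ghz * 1e9 * 2;          /* 1 FMA/cyc = 2 FLOPs */
                        double avx2 = ghz * 1e9 * 2 * 8 * 2;    /* 2 FMA ports x 8 lanes */
                        double w_scalar = 15.0, w_avx2 = 25.0;  /* assumed package power */
                        printf("scalar: %6.1f GFLOPS, %7.1f pJ/FLOP\n",
                               scalar / 1e9, w_scalar / scalar * 1e12);
                        printf("avx2:   %6.1f GFLOPS, %7.1f pJ/FLOP\n",
                               avx2 / 1e9, w_avx2 / avx2 * 1e12);
                        return 0;
                    }
                    ```

                    Even with the higher package power, those assumed numbers work out to roughly 2500 vs. 260 pJ per FLOP, i.e. about an order of magnitude better energy per FLOP when vectorized.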

                    • #40
                      Originally posted by Weasel View Post
                      Citation needed.

                      Not gonna bother, but some of you guys really have no idea what you're talking about, and think repeating your opinions will somehow turn them into facts.
                      Cite what exactly, the future? ARM9 broke binary compatibility with ARM8, which broke with ARM7. Ignoring the Cortex and Thumb variations, that's three major releases since '93.

                      Originally posted by Weasel View Post
                      (I also laughed hard at the branch prediction being "newer and cleaner", it's just too funny, because you speak of crap you clearly have zero clue of, as the branch predictor is not even EXPOSED via the ISA on x86 at least, so each micro-architecture (e.g. Skylake) can have a different branch predictor, totally new or not)
                      Pipeline width is tied to the kind of predictor you can use, which is in turn constrained by the instruction encoding when backwards compatibility is a concern. Intel can't just switch to a whole new microarchitecture of their choosing. Decoder or not, the width needs to be about the same or less, and the cache hierarchy (memory hierarchy, for the non-VLIW crowd) needs to line up with the C memory model. That limits their choice of predictors (and consequently, L$ layout) from dozens to two or three variations of the same one, plus a few internal details that may or may not produce the kind of nasal demons we're seeing in the current generation of speculative attacks.

                      TL;DR: worst-case scenario, ARM can design whole new cores with a whole new ISA, taking advantage of some exotic prediction method nobody bothered commercializing because out-of-order was good enough until Meltdown. Intel, on the other hand, is very limited in what they can do, and having a decoder only means they get to tweak some things without huge performance losses. It doesn't mean they can just use it as some magic 95%-efficient emulator for everything.
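
                      For reference, the gadget everyone is arguing about, in minimal sketch form (the array names follow the original Spectre paper's convention; the fence is one of the standard mitigations):

                      ```c
                      /* Spectre v1 (bounds check bypass) shape, plus a
                       * serializing-fence mitigation. x86-64, GCC/Clang. */
                      #include <stdint.h>
                      #include <stddef.h>
                      #include <emmintrin.h>      /* _mm_lfence() */

                      uint8_t array1[16];
                      size_t  array1_size = 16;
                      uint8_t array2[256 * 4096];

                      /* Vulnerable: the core may speculatively run the body
                       * with an out-of-bounds x before the bounds check
                       * resolves, leaving a secret-dependent cache line. */
                      uint8_t victim_unsafe(size_t x) {
                          if (x < array1_size)
                              return array2[array1[x] * 4096];
                          return 0;
                      }

                      /* Mitigated: the lfence keeps the loads from executing
                       * until the branch has actually retired. */
                      uint8_t victim_fenced(size_t x) {
                          if (x < array1_size) {
                              _mm_lfence();
                              return array2[array1[x] * 4096];
                          }
                          return 0;
                      }
                      ```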
