ARM Aims To Deliver Core i5-Like Performance At Less Than 5 Watts


  • ldesnogu
    replied
    Originally posted by microcode View Post
    Only if you use it, really. And if you do/can use AVX2, your W/FLOP is going to be better than if you didn't, most of the time (and your throughput is going to be hard to beat, which is part of why I'm skeptical).
    Since 65nm, leakage has become a major issue: the mere existence of transistors increases power consumption even when they are not used. Even clock gating is not enough, and power gating is required. The problem with power gating is that waking up a block is not immediate and creates power spikes.

    I think Intel now power gates some of its blocks (AVX-512 for sure, perhaps AVX2 too), but they were late to that game compared to ARM; I guess they have mostly caught up now, given how low-power they can get with some of their (binned) CPUs.
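The tradeoff described above can be sketched numerically. A minimal toy model (all wattage figures below are made-up assumptions for illustration, not measurements of any real chip) showing why clock gating stops being enough once leakage dominates:

```python
# Toy power model of one functional block (e.g. a wide SIMD unit).
# Assumed, illustrative numbers only -- not real silicon data.

def block_power(active, clock_gated, power_gated,
                dynamic_w=2.0, clock_tree_w=0.3, leakage_w=0.5):
    """Approximate power draw of one block, in watts."""
    if power_gated:
        return 0.0            # supply cut: no dynamic power, no leakage
    p = leakage_w             # leakage flows whenever the block is powered
    if not clock_gated:
        p += clock_tree_w     # clock tree toggles even when the block idles
    if active:
        p += dynamic_w        # switching activity while executing
    return p

# An idle block still leaks unless it is power gated:
idle_clock_gated = block_power(active=False, clock_gated=True, power_gated=False)
idle_power_gated = block_power(active=False, clock_gated=False, power_gated=True)
print(idle_clock_gated)  # 0.5 -- pure leakage survives clock gating
print(idle_power_gated)  # 0.0 -- power gating removes it entirely
```

The model also hints at the wake-up cost mentioned above: going from `power_gated=True` back to active means re-energizing the supply rail, which is neither instant nor smooth on real hardware.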



  • brauliobo
    replied
    There are almost NO BENCHMARKS OF ARM vs INTEL



  • Weasel
    replied
    Originally posted by c117152 View Post
    Cite what exactly, the future? ARM9 broke binary compatibility with ARM8 which broke with ARM7. Ignoring the cortex and thumb variations, that's 3 major releases since 93.
    Because ARM was a complete non-factor back then.

    I'm talking about ARM-on-the-desktop, which is the topic. You literally said "ARM can just break ISA backwards compatibility" and I assumed you were speaking about this subject. Maybe it's a misunderstanding, but I'd like to see how far they get if they do that on the desktop.

    Originally posted by c117152 View Post
    Pipeline width is tied with the kind of predictor you can use which is determined by the instruction width when backwards compatibility is a concern. Intel can't just switch to a whole new microarch of their choosing. Decoder or not, width needs to be about the same or less and cache coherence (memory hierarchy for the non VLIW crowd) needs to grid to the C model. That limits their choices of predictors (and consequently, L$ layout) from dozens to 2 or 3 variations of the same one and a few internal details that may or may not produce the kind of nasal demons we're seeing in the current generation of speculative attacks.
    ...what?!?

    Did you just string together some random buzzwords?

    Your first statement is already wrong anyway. Pipeline width has nothing to do with instruction width, like, at all. (Also, what instruction width? It's variable on x86, so which one are you referring to?)



  • c117152
    replied
    Originally posted by Weasel View Post
    Citation needed.

    Not gonna bother, but some of you guys really have no idea what you're talking about, and think repeating your opinions will somehow turn them into facts.
    Cite what exactly, the future? ARM9 broke binary compatibility with ARM8 which broke with ARM7. Ignoring the cortex and thumb variations, that's 3 major releases since 93.

    Originally posted by Weasel View Post
    (I also laughed hard at the branch prediction being "newer and cleaner", it's just too funny, because you speak of crap you clearly have zero clue of, as the branch predictor is not even EXPOSED via the ISA on x86 at least, so each micro-architecture (e.g. Skylake) can have a different branch predictor, totally new or not)
    Pipeline width is tied with the kind of predictor you can use which is determined by the instruction width when backwards compatibility is a concern. Intel can't just switch to a whole new microarch of their choosing. Decoder or not, width needs to be about the same or less and cache coherence (memory hierarchy for the non VLIW crowd) needs to grid to the C model. That limits their choices of predictors (and consequently, L$ layout) from dozens to 2 or 3 variations of the same one and a few internal details that may or may not produce the kind of nasal demons we're seeing in the current generation of speculative attacks.

    TL;DR: worst-case scenario, ARM can design whole new cores with a whole new ISA, taking advantage of some exotic prediction method no one bothered commercializing since out-of-order was good enough until Meltdown. Intel, on the other hand, is very limited in what they can do. Having a decoder only means they get to tweak some things without incurring huge performance losses; it doesn't mean they get to use it as some magic 95%-efficient emulator for everything.



  • microcode
    replied
    Originally posted by coder View Post
    Intel's TDP includes their GPU. AVX2 also consumes quite a bit of power...
    Only if you use it, really. And if you do/can use AVX2, your W/FLOP is going to be better than if you didn't, most of the time (and your throughput is going to be hard to beat, which is part of why I'm skeptical).
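The W/FLOP argument above is easy to see with a back-of-envelope comparison. The throughput and power figures below are assumed, illustrative numbers (not measurements of any particular CPU): vectorized FMA raises power draw, but raises throughput by more, so energy per unit of work drops.

```python
# Hypothetical per-core FP throughput figures -- assumptions for illustration.
scalar = {"watts": 10.0, "gflops": 8.0}    # assumed scalar FMA throughput
avx2   = {"watts": 15.0, "gflops": 32.0}   # assumed 4-wide FMA, higher power

def watts_per_gflop(cfg):
    """Energy efficiency: lower is better."""
    return cfg["watts"] / cfg["gflops"]

print(watts_per_gflop(scalar))  # 1.25
print(watts_per_gflop(avx2))    # 0.46875 -- more power, but far better W/FLOP
```

Under these assumed numbers, AVX2 burns 50% more power yet is roughly 2.7x more efficient per FLOP, which is the point being made: the extra power only matters if the extra throughput goes unused.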



  • ldesnogu
    replied
    Originally posted by Weasel View Post
    Nobody forces you to use it. Transistors that are not used do not use power.
    That made my day.



  • Weasel
    replied
    Originally posted by c117152 View Post
    Moreover, ARM can just break ISA backwards compatibility while Intel can't.
    Citation needed.

    Not gonna bother, but some of you guys really have no idea what you're talking about, and think repeating your opinions will somehow turn them into facts.

    https://www.logicalfallacies.org/arg...epetition.html

    (I also laughed hard at the branch prediction being "newer and cleaner", it's just too funny, because you speak of crap you clearly have zero clue of, as the branch predictor is not even EXPOSED via the ISA on x86 at least, so each micro-architecture (e.g. Skylake) can have a different branch predictor, totally new or not)
    Last edited by Weasel; 17 August 2018, 08:00 AM.



  • Weasel
    replied
    Originally posted by coder View Post
    Intel's TDP includes their GPU. AVX2 also consumes quite a bit of power, and I don't know how that factors into Intel's TDP estimates.
    Nobody forces you to use it. Transistors that are not used do not use power.



  • c117152
    replied
    Originally posted by Wilfred View Post
    arm64 also has the speculative execution vulnerabilities, so ARM has to do that too.
    Hardly. The branch prediction in ARM is both newer and cleaner, so it's not incredibly difficult to fix. Moreover, ARM can just break ISA backwards compatibility while Intel can't.



  • L_A_G
    replied
    Originally posted by johnc View Post
    Yeah, yeah... They've been saying this for years and have been getting nowhere close. Not to mention that nobody wants Windows ARM laptops and ARM can't see beyond Windows laptops.
    Years? Your memory seems a bit inaccurate: they announced their first "i5 performance at a lower wattage" part, the Cortex-A76, only at the end of May this year, meaning it's only been about 2.5 months since they started talking about getting into the laptop performance envelope.

    We haven't seen any devices actually using it, and the process it's supposed to be manufactured on, Samsung's 7nm process, has yet to reach volume production. As such, any sane person will at least wait until actual devices using these "laptop level" ARM parts start showing up (which realistically should be at some point next year).
    Last edited by L_A_G; 21 August 2018, 09:15 AM.

