
ARM Cortex-A15 vs. NVIDIA Tegra 3 vs. Intel x86


  • #11
    Originally posted by nej_simon View Post
    This is really interesting. I wonder how the Apple A6 would do, since it's after all the most powerful ARM SoC today. But I guess that would be difficult to test unless PTS is ported to iOS.

    If I remember correctly the Exynos 5 has a TDP of 4 W compared to the Atom D525's 13 W, and yet the Exynos 5 is mostly faster. And again, it's not even the fastest ARM SoC available. This is after Intel has spent several years developing the Atom. I can't help but think that perhaps x86 is too complex to ever be really power efficient.

    It's only 2.5 W peak.



    • #12
      Originally posted by Krysto View Post
      I don't think we'll really see a dual-core 2.5 GHz A15 in the market, especially in phones. It's not really possible to do that at 28nm, and by the time you go to 20nm it's already time to switch to Cortex-A57, so you're better off using that. I think we'll even see dual-core 3 GHz Cortex-A57 in 2015 or so, at 14nm. 3 GHz ARM processors should be possible at 14nm while maintaining the same low power level.
      28nm LP can already give a 2.5 GHz dual-core ARM A15, and even a quad-core (2×A15 + 2×A7 at 2.5 GHz), in a smartphone.



      • #13
        Originally posted by Krysto View Post
        I don't think we'll really see a dual-core 2.5 GHz A15 in the market, especially in phones. It's not really possible to do that at 28nm, and by the time you go to 20nm it's already time to switch to Cortex-A57, so you're better off using that. I think we'll even see dual-core 3 GHz Cortex-A57 in 2015 or so, at 14nm. 3 GHz ARM processors should be possible at 14nm while maintaining the same low power level.
        Actually Nvidia is planning 3 GHz desktop parts at 20nm with Project Denver (https://en.wikipedia.org/wiki/Project_Denver). I wouldn't be surprised if more ARM partners tried their hand at desktop/server AArch64 parts (AMD has already announced they'll be working on ARM Opteron parts).



        • #14
          I don't understand CPU architecture enough; maybe you guys can help clarify. How much of a modern CPU's architecture is tied to the ISA? I mean, would it take too much work to, say, port Piledriver/Steamroller/Excavator to accept the ARM ISA? Would the same apply to Haswell/Broadwell/etc.? ARM is constantly bashed for being too low-performance, but does that performance have anything to do with the ISA, or is it simply because nobody has released a high-power ARM chip yet?

          The way I see it, it's not about technology, it's about the business model. ARM's business model allows for the creation of a virtually unlimited number of SoCs with different building blocks that all run the same apps; x86's does not. If you are writing x86 apps you are bound to the limited range of chips sold by Intel and AMD, so in the long run ARM is a no-brainer simply due to the sheer variety and innovation possible in the ARM space, as long as fragmentation is kept in check.

          In view of the above, I'm very curious to know if we can expect something like a 125 W Steamroller-class ARM chip in the near future. With everybody but Intel on the ARM bandwagon, I can't see Intel's big customers being very happy having a single supplier...



          • #15
            Originally posted by Figueiredo View Post
            I don't understand CPU architecture enough; maybe you guys can help clarify. How much of a modern CPU's architecture is tied to the ISA? I mean, would it take too much work to, say, port Piledriver/Steamroller/Excavator to accept the ARM ISA? Would the same apply to Haswell/Broadwell/etc.? ARM is constantly bashed for being too low-performance, but does that performance have anything to do with the ISA, or is it simply because nobody has released a high-power ARM chip yet?
            I think it's mostly that no one has released a high-power ARM chip yet. Most modern x86 processors decode the ISA (x86/x86-64/etc.) into something the processor can handle more efficiently. It used to be, at least, that you'd decode x86 into a RISC-like internal instruction set and then have the rest of the processor use that decoded instruction stream; I believe that's still the case for most CPUs on the market. With enough work on a new instruction decoder, it could be possible for the high-performance guts of an x86 CPU to be reused in an ARM product. Beyond the raw instruction set, you'd also have to take care of floating point and NEON instruction decoding (probably translating to some version of SSE).
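            The decode step described above can be sketched as a toy lookup table. This is purely illustrative: the mnemonics and micro-op names are made up, and real x86 front-ends do this in hardware with vastly more cases.

```python
# Toy sketch of CISC-to-micro-op decoding (hypothetical mnemonics,
# not real x86 encodings). A read-modify-write CISC instruction is
# split into RISC-like load/compute/store steps; simple ops map 1:1.
MICRO_OPS = {
    "add [mem], reg": ["load tmp, [mem]",
                       "add tmp, tmp, reg",
                       "store [mem], tmp"],
    "mov reg, imm": ["mov reg, imm"],
}

def decode(instruction):
    """Return the micro-op sequence for a (hypothetical) instruction."""
    return MICRO_OPS[instruction]
```

            Retargeting such a core to a different ISA would, in this picture, mean replacing the front-end table while keeping the micro-op back end.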

            Originally posted by Figueiredo View Post
            In view of the above, I'm very curious to know if we can expect something like a 125W Steamroller ARM chip in the near future. With everybody but intel on the ARM bandwagon, I can't see Intel's big custumers being very happy having a single supplier...
            I don't know if I'd hold out for a 125 W ARM server chip, but I could see a high-performance 30-70 W chip being released in the next few years.



            • #16
              How is the install trivial? Did anybody read the instructions? It's a lot of steps just to get something with broken sound, no hardware acceleration, and a broken touchpad, and where making it persist requires typing a tricky command.

              Oh well, the CPU performance per watt seems stellar. It would be nice to plot that (something like the geometric mean of performance across all tests, normalized to the processor's TDP).
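              The suggested metric could be computed like this; a minimal sketch with a made-up helper name, assuming each benchmark reports a higher-is-better relative score:

```python
import math

def perf_per_watt_score(results, tdp_watts):
    """Geometric mean of per-test performance, normalized to TDP.

    results: dict mapping test name -> relative performance (higher is better)
    tdp_watts: the processor's rated TDP in watts
    """
    normalized = [score / tdp_watts for score in results.values()]
    return math.exp(sum(math.log(x) for x in normalized) / len(normalized))

# e.g. perf_per_watt_score({"test-a": 4.0, "test-b": 16.0}, 2.0) -> 4.0
```

              Using the geometric rather than arithmetic mean keeps one outlier benchmark from dominating the overall score.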



              • #17
                Originally posted by nej_simon View Post
                I wonder how the Apple A6 would do, since it's after all the most powerful ARM SoC today.
                This info is being spread all over teh Internets but I'm not sure what the original source is. It could have been that initial AnandTech flub at the iPhone 5 launch.

                But the A6 is about equivalent to what Qualcomm was shipping for a year with their Krait-based Snapdragon S4s.

                And there's no way an A6 is in the same league as a Cortex-A15 anything.



                • #18
                  Originally posted by Veerappan View Post
                  I think it's mostly that no one has released a high-power ARM chip yet. Most modern x86 processors decode the ISA (x86/x86-64/etc.) into something the processor can handle more efficiently. It used to be, at least, that you'd decode x86 into a RISC-like internal instruction set and then have the rest of the processor use that decoded instruction stream; I believe that's still the case for most CPUs on the market. With enough work on a new instruction decoder, it could be possible for the high-performance guts of an x86 CPU to be reused in an ARM product. Beyond the raw instruction set, you'd also have to take care of floating point and NEON instruction decoding (probably translating to some version of SSE).



                  I don't know if I'd hold out for a 125 W ARM server chip, but I could see a high-performance 30-70 W chip being released in the next few years.

                  Still the same crap, I see. The difference is very simple: "complex" and "reduced" refer not to individual instructions but to the instruction set as a whole, i.e. to the relations between instructions. In a RISC set, each instruction is a small variant or continuation of another, sharing a common root. In a CISC set, instructions don't share the same root, not even when they are split into micro-ops or fused into macro-ops; that's only possible by recompiling. The upshot is that on RISC the instructions run directly, with all the possibilities together in a single vector, while on CISC they need extended microcode and many units, and there's no possibility for JIT-style streaming (as in graphics) because a CISC compiler cannot use all the units efficiently, whereas with RISC any data can co-execute with any data. In the end, for the same generation and feature set, RISC needs a tenth of the transistors, and a tenth of the watts, to do the same amount of work.



                  • #19
                    Originally posted by johnc View Post
                    This info is being spread all over teh Internets but I'm not sure what the original source is. It could have been that initial AnandTech flub at the iPhone 5 launch.

                    But the A6 is about equivalent to what Qualcomm was shipping for a year with their Krait-based Snapdragon S4s.

                    And there's no way an A6 is in the same league as a Cortex-A15 anything.
                    Well, I can't see any sources in your post either, or any explanation of why the AnandTech benchmarks would be unreliable. More benchmarks have been made since the iPhone 5's launch, by the way; for example, the A6X used in the new iPad quite easily beats both the A15 and the S4 in GPU benchmarks. But perhaps those are a flub too?



                    • #20
                      Originally posted by nej_simon View Post
                      Well, I can't see any sources in your post either, or any explanation of why the AnandTech benchmarks would be unreliable. More benchmarks have been made since the iPhone 5's launch, by the way; for example, the A6X used in the new iPad quite easily beats both the A15 and the S4 in GPU benchmarks. But perhaps those are a flub too?
                      Those are GPU benchmarks of the PowerVR chip.
