ARM Cortex-A15 vs. NVIDIA Tegra 3 vs. Intel x86


  • #16
    How is the install trivial? Did anybody read the instructions? It is a lot of steps just to get something with broken sound, no hardware acceleration, and a broken touchpad. And on top of that, in order to make it persist, you need to type a tricky command.

    Oh well, the CPU performance per watt seems stellar. It would be nice to plot that! (Something like the geometric mean of performance across all tests, normalized to the processor's TDP; see the sketch below.)
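    A minimal sketch of that metric, assuming higher-is-better scores that have already been normalized to a baseline. The chip names, scores, and TDP figures below are made up for illustration and are not results from the article.

    # Hypothetical performance-per-watt: geometric mean of normalized,
    # higher-is-better benchmark scores, divided by the processor's TDP.
    from math import prod

    def perf_per_watt(scores, tdp_watts):
        geo_mean = prod(scores) ** (1.0 / len(scores))  # geometric mean of the scores
        return geo_mean / tdp_watts

    # Illustrative numbers only, not real measurements.
    chips = {
        "Cortex-A15 (Exynos 5)": ([1.0, 1.1, 0.9], 4.0),
        "Tegra 3":               ([0.6, 0.7, 0.5], 3.0),
        "Core i3":               ([2.5, 2.8, 2.2], 35.0),
    }
    for name, (scores, tdp) in chips.items():
        print(f"{name}: {perf_per_watt(scores, tdp):.3f} per watt")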

    • #17
      Originally posted by nej_simon View Post
      I wonder how the Apple A6 would do, since it's after all the most powerful ARM SoC today.
      This info is being spread all over teh Internets but I'm not sure what the original source is. It could have been that initial AnandTech flub at the iPhone 5 launch.

      But the A6 is about equivalent to what Qualcomm was shipping for a year with their Krait-based Snapdragon S4s.

      And there's no way an A6 is in the same league as a Cortex-A15 anything.

      • #18
        Originally posted by Veerappan View Post
        I think it's mostly that no one has released a high-power ARM chip yet. Most modern x86 processors decode the ISA (x86/x86-64/etc.) into something that the processor can handle more efficiently. It used to at least be that you'd decode x86 into a RISC-like instruction set and then have the rest of the processor use that decoded instruction stream; I believe that's still the case for most CPUs on the market. With enough work on a new instruction decoder, it could be possible for the high-performance guts of an x86 CPU to be reused in an ARM product. Beyond the raw instruction set, you'd also have to take care of floating-point and NEON instruction decoding (probably translated to some version of SSE).



        I don't know if I'd hold out for a 125W ARM server chip, but I could see a high performance 30-70W chip being released in the next few years.

        Still the same crap, I see. The difference is very simple: "complex" and "reduced" refer to the instruction set, not to individual instructions, i.e. to the relations between instructions. On RISC, one instruction is a small variation or a continuation of another. On CISC, instructions don't share a common root, not even when they are split into micro-ops or fused into macro-ops; that is only achievable by recompiling. The result is that on RISC the instructions execute immediately, with all the possibilities packed together in a single vector, while CISC needs extensive microcode and many units, and there is no possibility of JIT streaming (as with graphics), because a CISC compiler cannot use all the units efficiently, whereas with RISC any data can co-execute with any other data. In the end, for the same generation and feature set, RISC needs about a tenth of the transistors to do the same amount of work, and a tenth of the watts.
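        As an aside on the decode point in the quoted post above: the claim is that a modern x86 front end cracks each architectural instruction into one or more simpler internal micro-ops before execution. Below is a purely conceptual toy with a made-up instruction syntax and micro-op table; it is not how real silicon works, just the shape of the idea.

        # Toy illustration of a CISC-style front end cracking instructions
        # into RISC-like internal micro-ops (hypothetical syntax throughout).
        from typing import List

        UOP_TABLE = {
            "add [mem], reg": ["load tmp, [mem]", "add tmp, reg", "store [mem], tmp"],
            "add reg, reg":   ["add reg, reg"],  # simple ops map 1:1
        }

        def decode(instruction: str) -> List[str]:
            # Return the internal micro-op sequence; unknown instructions pass through.
            return UOP_TABLE.get(instruction, [instruction])

        for insn in ("add [mem], reg", "add reg, reg"):
            print(f"{insn!r:>18} -> {decode(insn)}")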

        • #19
          Originally posted by johnc View Post
          This info is being spread all over teh Internets but I'm not sure what the original source is. It could have been that initial AnandTech flub at the iPhone 5 launch.

          But the A6 is about equivalent to what Qualcomm was shipping for a year with their Krait-based Snapdragon S4s.

          And there's no way an A6 is in the same league as a Cortex-A15 anything.
          Well I can't see any sources in your post either, or any explanation of why the AnandTech benchmarks would be unreliable. More benchmarks have been made since the iPhone 5's launch, btw; for example, the A6X used in the new iPad quite easily beats both the A15 and the S4 in GPU benchmarks. But perhaps these are a flub too?

          • #20
            Originally posted by nej_simon View Post
            Well I can't see any sources in your post either, or any explanation of why the AnandTech benchmarks would be unreliable. More benchmarks have been made since the iPhone 5's launch, btw; for example, the A6X used in the new iPad quite easily beats both the A15 and the S4 in GPU benchmarks. But perhaps these are a flub too?
            Those are GPU benchmarks of the PowerVR chip.

            • #21
              Originally posted by johnc View Post
              Those are GPU benchmarks of the PowerVR chip.
              So? That doesn't change the fact that the A6(X), which incorporates a PowerVR GPU, is faster than the competing ARM SoCs in GPU performance. And the CPU benchmarks aren't exactly bad either; it either beats the competing SoCs or comes close. The combination of the two is why I would say the A6(X) is the fastest SoC today.
              Last edited by nej_simon; 11-29-2012, 05:55 PM.

              • #22
                Here you go: http://www.anandtech.com/show/6440/g...xus-4-review/3
                The A6 is able to best the Exynos 5 in a few, but not many, benchmarks. Obviously we'll never have truly comparable data, because these chips will never run the same OS.

                • #23
                  The GPU benchmarks aren't very relevant, because a manufacturer could "easily" decide to put a PowerVR instead of a Mali on its SoC and get equal or slightly better numbers in the GPU category.
                  So the A6 is still faster in graphics than the Exynos 5, but the A15 is way faster than the A9.

                  • #24
                    Ouya should have used an A15 chip instead of the Tegra 3.
                    Oh well, by the time it comes out its performance won't be that stellar.

                    • #25
                      Originally posted by Figueiredo View Post
                      Here you go: http://www.anandtech.com/show/6440/g...xus-4-review/3
                      The A6 is able to best the Exynos 5 in a few, but not many, benchmarks. Obviously we'll never have truly comparable data, because these chips will never run the same OS.
                      The A6 CPU core is simply weaker than the A15, and Anand says so at the beginning of the article, I believe. Combine that with the fact that the A15 is running at 1.7 GHz, while the A6 is at 1.3 GHz in the iPhone 5 and probably 1.5 GHz in the iPad (unless they kept it the same), and there's no way the A6's CPU can beat the A15. As someone else has said here, it should have around the same performance as Qualcomm's Krait, although perhaps slightly more efficient (something Anand says too).

                      Why aren't you seeing this reflected in Nexus 10 benchmarks? Because they are done with Chrome for Android, which in terms of performance is not competitive with mobile Safari right now, and that's all Google's fault for leaving it a full 5 versions behind desktop Chrome.

                      The exact same Exynos 5 scores under 700 ms in Chrome OS/Chrome 23, and about double the iPhone 5's scores in both the V8 and Octane browser tests.

                      http://www.androidauthority.com/exyn...hmarks-125134/

                      Again, the A6 CPU is not even close to the A15/Exynos 5 in terms of performance. The GPU is indeed about 50% faster in games, so I agree with that. But what do you use more in devices, the CPU or the GPU? Obviously the CPU, for all apps. So by that account, the Exynos 5 is the better chip. You can actually run full OSes on it, as you can see.
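                      A small hedged sketch of how one might put ms-based (lower is better) and points-based (higher is better) browser results like those above on a single relative-speed scale. All numbers and test names below are made up for illustration, not the actual results cited.

                      # Convert mixed benchmark results into "times faster than the baseline".
                      def relative_speed(score, baseline, lower_is_better):
                          return baseline / score if lower_is_better else score / baseline

                      # Illustrative values only, not real measurements.
                      results = {
                          "browser test A (ms)":     {"Exynos 5": 690,  "iPhone 5": 1400, "lower": True},
                          "browser test B (points)": {"Exynos 5": 4000, "iPhone 5": 2000, "lower": False},
                      }
                      for bench, r in results.items():
                          rel = relative_speed(r["Exynos 5"], r["iPhone 5"], r["lower"])
                          print(f"{bench}: Exynos 5 is {rel:.2f}x the iPhone 5")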

                      • #26
                        Originally posted by bachinchi View Post
                        Ouya should have used an A15 chip instead of the Tegra 3.
                        Oh well, by the time it comes out its performance won't be that stellar.
                        I agree, they should've waited for Tegra 4.

                          • #28
                            The test would have been fairer if we had a quad-core Cortex-A15, since the Intel chip had 4 threads to use. Sadly we don't have such an option on the market at the moment.

                            I wonder if the x86 tests were using 64-bit, i.e. giving an additional advantage to x86, or were they done in 32-bit too? If it was x86-64 I would be VERY impressed.

                            • #29
                              Originally posted by ksec View Post
                              I wonder if the x86 tests were using 64-bit, i.e. giving an additional advantage to x86, or were they done in 32-bit too? If it was x86-64 I would be VERY impressed.
                              It's all mentioned in the PTS tables..... All the 64-bit capable x86 hardware was using x86_64 images.
                              Michael Larabel
                              http://www.michaellarabel.com/
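                              For anyone reproducing these runs themselves, a quick sanity check (independent of PTS, using only Python's standard platform module) of what a given machine is actually running:

                              # Report the kernel's machine type and the bitness of this Python binary.
                              import platform

                              print(platform.machine())       # e.g. "x86_64" or "armv7l"
                              print(platform.architecture())  # e.g. ("64bit", "ELF")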

                              • #30
                                I used phoronix-test-suite to compare your benchmarks with mine.
