How A Raspberry Pi 4 Performs Against Intel's Latest Celeron, Pentium CPUs


  • #71
    Originally posted by Raka555 View Post
    The other thing that is holding the RPis back is the bus. They use USB vs PCIe on AMD/Intel.
    No doubt the price will skyrocket if they give the RPi a PCIe bus, but so will some of the performance.
    Technically speaking, the raspi has a native PCIe controller already, but it is used to run the USB controller.
    Someone hacked it and exposed the PCIe lanes https://www.tomshardware.com/news/ra...-a-step-closer

    And the Raspberry Pi Foundation has promised that their Raspberry Pi 4 Compute Module (the one that looks like a laptop RAM module) will expose the PCIe lanes so they will be usable https://www.electronics-lab.com/rasp...released-2021/



    • #72
      Originally posted by starshipeleven View Post
      ARM instruction set was born 35 fucking years ago in a completely different world, how can you make these bold claims about what it was "intended to be" based on PR releases of current ARM CPU designs (ARM CPU designs are specific implementations of ARM instruction set) and extrapolate that ARM instruction set as a whole was ever intended to be only for low power?
      Uhhh.... you do realize that it's only in the past 7 or so years that ARM actually started having noteworthy performance, right? Also, you're basically just covering your ears yelling "I'm not listening!" right now.
      Current ARM CPU designs have been tailored towards power efficiency because that's the market niche they managed to get into, and what anyone will want to buy from ARM Holdings company, but this does not tell us anything about the architecture's ability or original goals.
      When something is explicitly tailored to be made one way, that means it wasn't focused on being made any other way. Just about everyone who invests/licenses ARM knows that efficiency is a top priority. I never said that the architecture can't be pushed to clock higher, but the reason nobody does is because it's bad at it, something you for whatever reason seem to have a hard time believing.
      Which is why for example Amazon's ARM CPUs have similar performance to high end server AMD and Intel CPUs on single-core tests https://www.anandtech.com/show/15578...ntel-and-amd/5 while having a TDP that is 50-100w lower and the reviewer said "Amazon was able to deliver on its promise of 40% better performance per dollar, and it’s a massive shakeup for the AWS and EC2 ecosystem." Intel would throw fucking infants into a big fire to get a 40% better "performance per dollar" ratio on their high end CPUs.
      What does this have to do with the discussion? Amazon threw more instructions and cores at the problem and succeeded, not more Hz.
      Meanwhile, Apple has decided to drop x86 and migrate to their ARM CPUs, after years of benchmarks that showed how their ARM CPUs were more or less on par with Intel's 15w laptop parts.
      Yes, and with all the money they spent on additional instructions and fine-tune optimizations, it totally makes sense.
      What about NVIDIA? Oh they just want to buy ARM whole https://www.zdnet.com/article/nvidia...ks-to-buy-arm/
      I'm aware. What does that have to do with this?
      Yeah, they are totally not pushing or betting on ARM CPUs. It's all a hallucination.
      You do realize these companies are not ARM and do not represent ARM's motives in their developments, right? By your crappy logic, that's like saying Volvo knows better what Ford's engines are meant for, simply because they [used to] use Ford engines in their cars.
      Just because Amazon, Apple, Nvidia, etc use ARM's design and expand upon it, that doesn't mean they know what the architecture was built for. That doesn't mean they can't improve it (because they do) but that's not what ARM themselves represent.
      ARM Holdings designs ARM CPU cores and sells the license to use these CPU designs to embedded SoC manufacturers. They also design ARM cores for microcontrollers, and sell the license to use these to microcontroller manufacturers. Qualcomm is just buying ARM CPU designs and slapping them into their products. A Cortex-A78 is the same thing, be it in a Qualcomm, Huawei, Broadcom or NVIDIA SoC.
      Yup, and the sky is blue.
      But ARM Holdings also sells much more expensive ARM instruction-set licenses that allow anyone to make their own CPU design using the ARM instruction set. Similar to what Intel did with AMD and VIA. AMD and VIA CPUs are completely different from Intel CPUs even if they use the same x86 instruction set. Different implementations. Limits of one implementation are not limits of another, which is why, for example, VIA CPUs were so good at low power back in the day when Intel had only blast furnaces.
      Yup. Doesn't change my point.
      Yeah, sources that clearly explain how you can't tell the difference between a CPU implementation and a CPU instruction set.
      All you're doing is proving that you can't tell the difference between what ARM intends and what their licensees intend.



      • #73
        Originally posted by TheOne View Post
        What I would like to see is an Odroid N2+ comparison; that should be more interesting since it is a more powerful SBC than the RPi4
        https://openbenchmarking.org/result/...NE66&obr_sro=y
        Did a run on my N2 (nearly broken: USB hub dead + SD card does not work anymore), but the results should be fine.


        Would be interesting to see whether the big wins of the Intel chips come from most of the programs having at least SSE2, while there aren't many NEON optimizations for ARM CPUs yet.

        For FLAC it looks like aarch64/NEON support is in the works but not yet merged https://github.com/xiph/flac/pull/183



        • #74
          Originally posted by Raka555 View Post
          If you compile a 32-bit distro to use the Cortex-A53 as minimum, then it will be just as fast or even faster than the 64-bit OS.
          no, it won't. the fact that 32-bit ARM has half as many general-purpose registers as 64-bit will still hold it back quite a bit in a lot of workloads.



          • #75
            Originally posted by hotaru View Post

            no, it won't. the fact that 32-bit ARM has half as many registers as 64-bit will still hold it back quite a bit in a lot of workloads.
            And the fact that 64-bit pointers "waste" half of the cache they are loaded into evens things out. So more or less the same performance, depending on workload.



            • #76
              Originally posted by carewolf View Post

              That is just a marketing lie. They still had the reference designs and components. They have built up their new overall architecture from scratch, but they have all the pre-built components that they bought from ARM. So a new architecture "from scratch", but not a new CPU design from scratch. Or about as new as a new architecture from AMD or Intel.
              ARM reference designs are a bit like a Cessna, while what Apple designed is a corporate jet. Yeah, they both fly... one is far more powerful than the other.



              • #77
                Originally posted by Slartifartblast View Post

                Let's be realistic here, we are paying toy prices. It's built for a price and not for sheer performance. You want the fastest ARM then you are more than welcome to pay Apple prices and good luck breaking out of their garden.
                Yes, you are right about that.



                • #78
                  Originally posted by ldesnogu View Post
                  A76 and up have changed the direction ARM took. They now seriously target performance... at last! And this explains why companies such as Samsung and Qualcomm decided to switch to ARM designs rather than making their own ARM CPUs. But yeah, Apple is still miles ahead (at least 1 to 1.5 years in advance).
                  Hoping that you're right (honestly!). Any benchmarks though?



                  • #79
                    Originally posted by wizard69 View Post

                    If Apple's processor is as good as I think it can be, it will force changes in the industry. If nothing else, that is something good that Apple is doing. By the way, Apple's move here isn't really about ARM, even though I suspect they will have industry-leading performance. Rather, I see special function units being the big performance driver in their chips, with the Neural Engine getting a big boost in the next round of chips. Once the hardware is in place I expect a huge move towards AI/ML techniques in their software.

                    So what many will be seeing as great performance in Apple's new machines will not always be because of Apple's ARM processors.
                    The problems with special function units:

                    1. They're more-or-less set in stone - once you introduce them, you're committed to them (yes, you can extend them, but you generally need to keep your existing instructions and maintain compatibility).
                    2. It's really difficult to keep them busy. A compiler won't be able to figure out that your code can be mapped on a neural unit, so most often you need to either code stuff by hand (assembly, intrinsics) or rely on a library that makes use of them.
                    3. The gains are limited by Amdahl's law.

                    All these arguments were made in the original CISC vs RISC papers, FWIW.



                    • #80
                      Originally posted by Raka555 View Post

                      And the fact that 64bit pointers "waste" 1/2 the cache they are loaded in, even things out. So more or less the same performance, dependent on workload.
                      no, that doesn't "even things out". performance is generally much better with 64-bit code. the increase in pointer size doesn't make anywhere near as much difference as doubling the number of registers.

