AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come


  • #81
    Originally posted by rabcor View Post

    ARM/RISC-V are doing it at lower power/higher efficiency, that's why.
    Are you accepting new knowledge, or have you retired from learning? Apple's cores are fabricated on a lithography variant two steps down. Just lowering a core's frequency by 20% (and the voltage with it) from its maximum cuts power consumption by roughly 50% of the maximum. When you also use a step-down lithography that gives 20% less maximum frequency, you get that extra cut in power, so the core consumes about 60% less for just a 20% performance loss. If you go two steps down you can have 64% of the frequency (80% * 80%) for 16% of the consumption (40% * 40%). That is roughly a 4x performance-per-watt gain, and everyone can do it. The only problem is that you need to order the specific wafers from TSMC, and those wafers cannot be used for faster chips, just as the fast ones cannot be used for slower chips. Keep that in mind and you will see that AMD's chiplet design has another advantage: they can pair a 100%-frequency chiplet with an 80% one and a 64% one, consuming 100 W + 40 W + 16 W.
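    To put rough numbers on that claim, here is a minimal sketch (Python, purely illustrative) of the usual dynamic-power approximation P ~ C * f * V^2. It assumes each 20% frequency step also permits roughly a 20% voltage drop; any extra saving attributed to the node step itself is not modelled here.

        # Illustrative only: dynamic CPU power scales roughly as P ~ C * f * V^2.
        # Assumption: each 20% frequency cut also allows ~20% lower voltage.
        def relative_dynamic_power(freq_scale, volt_scale):
            """Power relative to the full-speed operating point (C cancels out)."""
            return freq_scale * volt_scale ** 2

        for freq, volt in [(1.00, 1.00), (0.80, 0.80), (0.64, 0.64)]:
            power = relative_dynamic_power(freq, volt)
            print(f"freq {freq:.0%}, volt {volt:.0%} -> power {power:.0%}, perf/W {freq / power:.2f}x")
        # Prints roughly: 100% -> 100% power, 80% -> ~51% power, 64% -> ~26% power,
        # i.e. the f*V^2 term alone already lands near the "20% frequency for ~50% power" figure.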
    Last edited by artivision; 30 August 2022, 08:37 PM.



    • #82
      Originally posted by coder View Post
      Don't forget that Apple's cores were designed, from the ground-up, with efficiency as the number 1 priority.
      Their ARM chips have the same PACMAN flaw as other ARM chips. I don't think Apple did as much of a ground-up design as you think.
      Also, Apple can afford to trade die size for energy-efficiency more so than AMD.
      I don't know how you can come to that conclusion, or how that even makes sense?



      • #83
        Originally posted by rabcor View Post
        too bad we're in the middle of a transition from x86_64 to RISC/ARM architectures;
        ARM is mostly for mobile phones and tablets. ARM is very old, like Panasonic 3DO old. If it hasn't displaced the x86 market by now, it's safe to say it never will. And what about PowerPC? That was supposedly ready to displace x86 too.
        and my mid-range gaming laptop from 3 years ago is still completely solid for the latest games despite only having an 8th gen intel cpu in it and an rtx 2060.

        There's very little reason for anyone who bought a pc in the last 6 years to upgrade if they bought a halfway decent one (maybe if you do CAD work or a lot of compiling or other such intense work then you should be persuaded, but otherwise in general not), unless it broke; so I'm kinda inclined to wait for the RISC/ARM laptops to become properly mainstream. Apple set a precedent, I'm honestly quite surprised that this transition isn't already in full swing yet (suspicious even).
        Yea, that's PCs in general. You shouldn't need to upgrade every year or two. Nobody will ever transition towards ARM on desktops and laptops, for the same reason nobody on Android will use x86: the majority of software is not going to be compatible. If tomorrow everyone decided to make an ARM build of their software for Windows and Linux, then you could be right. As it stands right now, a lot of games on macOS aren't being updated since the transition to 64-bit only and ARM.
        Last edited by Dukenukemx; 30 August 2022, 08:50 PM.



        • #84
          Originally posted by artivision View Post

          Are you accepting new knowledge, or have you retired from learning? Apple's cores are fabricated on a lithography variant two steps down. Just lowering a core's frequency by 20% (and the voltage with it) from its maximum cuts power consumption by roughly 50% of the maximum. When you also use a step-down lithography that gives 20% less maximum frequency, you get that extra cut in power, so the core consumes about 60% less for just a 20% performance loss. If you go two steps down you can have 64% of the frequency (80% * 80%) for 16% of the consumption (40% * 40%). That is roughly a 4x performance-per-watt gain, and everyone can do it. The only problem is that you need to order the specific wafers from TSMC, and those wafers cannot be used for faster chips, just as the fast ones cannot be used for slower chips. Keep that in mind and you will see that AMD's chiplet design has another advantage: they can pair a 100%-frequency chiplet with an 80% one and a 64% one, consuming 100 W + 40 W + 16 W.
          If it's that simple, why aren't AMD and Intel doing it? Why does it seem like ARM and RISC-V manufacturers are the only ones doing this?

          How is it that Apple's M1 and M2 series chips have such an insanely higher performance per watt than all existing competition? If it's really as simple as you say, why isn't there an Intel or AMD powered laptop that can match the M1 in performance per watt, one that can run at as low power with as high performance? I managed to dig this comparison up:

          [chart: M1-Max-Chart-AT-640x464.jpg]

          The M1 generally uses insanely less power with performance comparable to that high-end gaming laptop; in a lot of cases it's using less than half the power for the same results, and that's one of the best mobile chips Intel has ever made. 20% less clock speed would reduce power draw by 50%? Would it really, though? I don't really believe it's that simple. I've been messing with this stuff myself to get my laptop to draw less power. In my testing I'm just limiting the frequency; I didn't go into the BIOS and I didn't change the voltage, but well...

          (There's maybe a 1 W margin of error for these tests I ran; I just had a wattmeter and ran the same kind of CPU stress test for 'em.)
          800 MHz: 21.8 W
          900 MHz: 22.2 W
          1.0 GHz: 22.2 W
          1.1 GHz: 23.2 W
          1.2 GHz: missing (most likely ~24 W)
          1.3 GHz: 25 W
          1.5 GHz: 26 W
          1.7 GHz: 27 W
          1.9 GHz: 29 W
          2.1 GHz: 31 W
          2.2 GHz (core clock): 33 W
          4.1 GHz (boost): 76 W


          Now if you ask me, from the lowest frequency I could set up to the base clock, the power-draw-to-clock-speed correlation looks mostly linear. If what you said were true, I would expect a much bigger power gap between the boost and the base clock. There is certainly a trend where the higher the clock goes, the more it costs to increase it further, but nowhere near "50% for the last 20%"; in fact, very far from that.
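          A rough way to sanity-check that "mostly linear" impression is to fit a straight line to the fixed-frequency points above (a sketch only, assuming those readings are whole-laptop wall power at stock voltage; the missing 1.2 GHz point is skipped):

              # Least-squares fit of power = slope * freq + baseline over the fixed-frequency
              # points measured above (wall wattmeter, voltage left at stock, 1.2 GHz missing).
              freqs = [0.8, 0.9, 1.0, 1.1, 1.3, 1.5, 1.7, 1.9, 2.1, 2.2]            # GHz
              watts = [21.8, 22.2, 22.2, 23.2, 25.0, 26.0, 27.0, 29.0, 31.0, 33.0]  # W

              n = len(freqs)
              mean_f, mean_w = sum(freqs) / n, sum(watts) / n
              slope = (sum((f - mean_f) * (w - mean_w) for f, w in zip(freqs, watts))
                       / sum((f - mean_f) ** 2 for f in freqs))
              baseline = mean_w - slope * mean_f
              print(f"~{slope:.1f} W per GHz on top of a ~{baseline:.1f} W baseline")
              print(f"extrapolated to 4.1 GHz: {slope * 4.1 + baseline:.0f} W (measured: 76 W)")
              # Roughly 7.6 W/GHz over a ~15 W baseline; extrapolating that line to the
              # 4.1 GHz boost point gives ~46 W versus the measured 76 W, so the boost
              # point (where the voltage also rises) sits well above the linear trend.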

          Even at its highest performance levels my CPU doesn't match the M1's performance. Without turbo boost the power draw is certainly looking pretty decent, though (my CB23 MT score would be in the range of 6000, maybe even a smidge under; I didn't run it, but I checked scores for my CPU). So even when running at 70+ watts, my Intel i7-8750H will deliver only half the performance the M1 does at 39 W.

          Now, the i9-11980HK happens to be a huge improvement: twice the performance for only 10 W more power draw. But that's still 2x the power draw of the M1 for relatively similar performance (a bit better, granted, but not greatly).

          I am accepting new knowledge, certainly, but I'm not so sure what you're telling me has much merit. The disparity between the M1 and its competition is just too huge. Surely if it were as simple as you say, Intel and AMD would be doing this too to compete with the M1 on power draw; they know how much it matters in the mobile device market.



          • #85
            Originally posted by rabcor View Post

            If it's that simple, why aren't AMD and Intel doing it? Why does it seem like ARM and RISC-V manufacturers are the only ones doing this?
            Why RISC-V? It isn't relevant yet, and probably won't ever be. Not unless someone like Google picks it up and spends real money to progress it.
            How is it that Apple's M1 and M2 series chips have such an insanely higher performance per watt than all existing competition? If it's really as simple as you say, why isn't there an Intel or AMD powered laptop that can match the M1 in performance per watt, one that can run at as low power with as high performance? I managed to dig this comparison up:

            [chart: M1-Max-Chart-AT-640x464.jpg]
            That's because if you run real-world applications, the Apple M1/M2 aren't really any better than AMD's Rembrandt.




            • #86
              Originally posted by birdie View Post

              Less evil AMD starts ripping off their customers a lot more than Intel has ever done as soon as they have a competitive advantage.

              From Sandy Bridge to Comet Lake AMD had nothing even remotely close in terms of performance and power efficiency. Did Intel raise their pricing? Hell no. A few bucks increases at most here and there. Intel did something clandestinely. AMD does it openly. Looks like in this case it's totally fine.

              AMD64 and Ryzen 5000 CPUs on the other hand? Oh, boy, AMD welcomes fat margins as soon as they can.

              3600 - $200 (Intel is still competitive)
              5600X - $300, or a 50% price increase (Intel is not really competitive)

              3700X - $330 (Intel is still competitive)
              5800X - $450, or a 36% price increase (Intel is not really competitive)

              Athlon 64 FX-57 was released at mind-bogglingly crazy $1,031! Athlon 64 X2 4800+ went for $1,001. People seem to have such short memory about their favourite underdog. It's always only Intel which is bad. F it. I'm so fucking tired of it.

              And of course you will come up with excuses why only AMD can pull off such crap and why Intel and NVIDIA are the worst companies in the world if they do it.
              Intel did raise their prices. The 9900K raised the price of the top mainstream desktop model from $360 (8700K) to $490. Intel also raised the x600K price to $290 from the historical $240. In fact, Intel bears the most responsibility for AMD pricing R5s at ~$300, because Intel made that price point mid-range by introducing the i9 into the mainstream lineup. Intel's x600K SKUs are gradually moving towards it too.

              At the end of the day, people either buy products or they don't. From a purely economic point of view, if people willingly buy products at the raised price, then the price was too low to begin with. No need to cry about capitalist reality.



              • #87
                Originally posted by Dukenukemx View Post
                Why RISC-V? It isn't relevant yet, and probably won't ever be. Not unless someone like Google picks it up and spends real money to progress it.

                That's because if you run real-world applications, the Apple M1/M2 aren't really any better than AMD's Rembrandt.

                It's the same test he runs first for the power comparison, CB23 MT. That's convenient; his results seem more or less consistent with what we see in the benchmarks I found for the M1 Pro, and the i9-12900HK seems fairly similar to the i9-11980HK.

                Thanks for this, you've opened my eyes a bit to the possibility that RISC architectures might actually not be taking over and that Intel and AMD are catching up.

                I still feel it's somewhat of a shocker that the M2's graphics are better than the Ryzen 7 6800U's. It certainly ain't no RTX 3090 like Apple claimed, but AMD is a long-time manufacturer of GPUs; they should have an overwhelming advantage in this area.



                • #88
                  Originally posted by atomsymbol
                  It is invalid to compare power consumption of CPU cores that are manufactured using different process nodes (such as: 5nm and 7nm) or that are running at different frequencies (such as: 2.2GHz and 3GHz).
                  Depends on your purpose. If you just want to account for power-efficiency in your purchasing decision, then you have to choose from whatever products are available and it's entirely fair to compare them along the lines of their intended use. However, if you're trying to understand the relative efficiency of the microarchitecture, that's a different matter.

                  As for measuring at ISO-frequency, that makes sense if you really care about "IPC". However, IPC is only relevant in context. If a core is designed to clock higher, it will tend to have lower IPC but might still achieve better single-thread performance. And maybe that's what someone really cares about.
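                  As a toy illustration of that last point (hypothetical cores, made-up numbers): single-thread throughput is roughly IPC times frequency, so a narrower core clocked higher can still come out ahead of a wider, higher-IPC design.

                      # Toy example with made-up numbers: perf ~ IPC * frequency.
                      cores = {
                          "wide, high-IPC, low-clock core":     {"ipc": 5.0, "ghz": 3.2},
                          "narrow, lower-IPC, high-clock core": {"ipc": 3.5, "ghz": 5.5},
                      }
                      for name, c in cores.items():
                          perf = c["ipc"] * c["ghz"]  # billions of instructions per second, roughly
                          print(f"{name}: {c['ipc']} IPC * {c['ghz']} GHz = {perf:.2f} GIPS")
                      # The higher-IPC core loses on single-thread throughput here: 16.00 vs 19.25 GIPS.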

                  Originally posted by atomsymbol
                  The µop cache, as well as the loop stream buffer, were introduced to Intel CPUs as a feature that (primarily) saves power and (secondarily) delivers higher IPC. I seriously doubt that ARM or RISC-V (with or without µop cache) can defeat the power-efficiency of x86's µop cache by more than a very small margin.
                  Interestingly, it seems that the AArch64 decoder in ARM CPUs is so cheap that the MOP cache, as they call it, is of very limited value. So much so, that ARM removed it from the A715 and were able to re-invest the die area in widening the decoder for a net-zero effect on performance and a net-gain in efficiency and die area.



                  BTW, the A715 is the first A7x-series core to drop AArch32 support. Maybe that's the real reason they no longer need a MOP cache. It does illustrate how a winning microarchitectural feature for one ISA doesn't necessarily pull its weight for another.
                  Last edited by coder; 31 August 2022, 09:55 AM.



                  • #89
                    Originally posted by Dukenukemx View Post
                    I don't know how you can come to that conclusion, or how that even makes sense?
                    It's because Apple is a products company. They don't sell individual CPUs, so they can afford to spend more on building larger CPUs, if they can shave costs elsewhere. And one place they save money is by not paying the ~60% margins that someone like Intel likes to charge (I'm sure Apple negotiated it far lower, but there's still a markup). Furthermore, Apple is able to charge premium prices for many of their products, which gives them additional room to invest in bigger chips on newer nodes than most of their competitors.

                    The other reason Apple can make larger cores is that they don't make CPUs with very many. If you're Intel or AMD, you need to worry about limiting the costs of your 56-core, 64-core, or 96-core CPUs. That puts downward pressure on core size, which means you need to clock them higher to deliver competitive performance. And that makes them less power-efficient.



                    • #90
                      Originally posted by Dukenukemx View Post
                      ARM is mostly for mobile phones and tablets. ARM is very old, like Panasonic 3DO old. If it hasn't displaced the x86 market by now, it's safe to say it never will. And what about PowerPC? That was supposedly ready to displace x86 too.
                      AArch64 is only about a decade old. It's kind of like how, when people talk about x86, they don't actually mean the 32-bit ISA from 1985 or the 16-bit ISA from 1978 -- they mean x86-64 from 2004.

                      Originally posted by Dukenukemx View Post
                      Nobody will ever transition towards ARM on desktops and laptops, for the same reason nobody on Android will use x86: the majority of software is not going to be compatible.
                      That's funny, because Android does support multiple ISAs, including x86-64 (last I checked)! When Intel was briefly in the phone business, there were actually some x86-64 Android phones that you could buy in a shop. Here's one:



                      Originally posted by Dukenukemx View Post
                      As it stands right now, a lot of games on macOS aren't being updated since the transition to 64-bit only and ARM.
                      From what I've read Apple's x86-64 emulator is quite good. Windows 11 also has one that now handles x86-64, but I don't know how it compares.

