Intel Announces 13th Gen "Raptor Lake" - Linux Benchmarks To Come

  • #51
    Originally posted by WannaBeOCer View Post

    At the end of the day, Intel screwed up by removing AVX-512 from consumer hardware because of haters of the instruction set, for example Linus Torvalds: https://www.phoronix.com/news/Linus-Torvalds-On-AVX-512

    When it comes to the actual workload, I doubt AVX-512 workloads are more efficient on AMD's non-native, "double-pumped" implementation than on Intel's native implementation of AVX-512. I have a 12700K from the first batch, which still supports AVX-512, so if I get my hands on a 7700X I'll definitely test the two.

    Where are you getting this made-up information about Intel's E-cores being as quick as a single thread from HT/SMT? Gracemont E-cores' IPC is on par with the Skylake cores used from Skylake through Comet Lake, but they are clocked lower and built on a newer node, which is why Gracemont is efficient. Intel's hybrid chips aren't big.LITTLE like ARM. They're big.BIGGER: Golden Cove/Raptor Cove cores are massive compared to traditional CPU cores like Gracemont/Skylake cores.

    https://chipsandcheese.com/2021/12/2...he-atom-cores/

    Edit: I want to add, regarding AI, that Gracemont cores are efficient for running inference thanks to their VNNI INT8 units.


    First of all, the claim that Intel has a better AVX-512 implementation that is not "double-pumped" is a myth that can easily be debunked by reading the Intel Optimization Manuals.

    Almost all AVX-512 instructions on Intel CPUs are also "double-pumped", in the sense that the throughput for 512-bit instructions is the same as for 256-bit instructions, because each 512-bit instruction is executed by combining two 256-bit execution pipelines into a single one.

    The only exception is FMA: some of the most expensive Intel CPUs, priced at thousands of dollars, have a second 512-bit FMA unit, so for FMA alone they have double the throughput when 512-bit instructions are used. This second FMA unit is present in all Xeon Platinum models, in some Xeon Gold models, and in those Xeon W models that support AVX-512. It was also present in a few of the HEDT Intel CPUs.

    Otherwise, all Intel and AMD CPUs that support AVX-512 have exactly the same throughput: two 512-bit instructions per clock cycle.

    The difference between the various models lies only in the restrictions on which of the more complex instructions may occupy both of the two issue slots in a clock cycle.

    On most Intel CPUs, only 1 of the 2 instructions may be an FMA or an FADD.
    On Zen 4, only 1 of the 2 instructions may be an FMA, but the other can be an FADD, so this is better than for most Intel CPUs with AVX-512.
    On Intel Xeon Platinum and similar CPUs, both instructions can be an FMA.

    Not only does Zen 4 have better throughput than the majority of Intel CPUs, by being able to execute both an FMA and an FADD per cycle, but it also has double the throughput for certain kinds of shuffle and permute instructions.
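    As a sanity check on these issue restrictions, the implied peak fp32 FLOPs per clock can be tabulated (a minimal sketch based only on the slot rules above, assuming 16 fp32 lanes per 512-bit instruction and counting an FMA as 2 FLOPs per lane):

```python
# Peak fp32 FLOPs per clock implied by the 512-bit issue restrictions
# described above (sketch; assumes 16 fp32 lanes per 512-bit pipe and
# counts an FMA as 2 FLOPs per lane, an FADD as 1).

LANES = 512 // 32  # 16 fp32 lanes per 512-bit instruction

def flops_per_cycle(fma_slots, fadd_slots):
    """FLOPs/cycle from the number of 512-bit FMA and FADD issue slots."""
    return fma_slots * LANES * 2 + fadd_slots * LANES

configs = {
    "most Intel client (1 FMA, other slot non-FP-add)": flops_per_cycle(1, 0),
    "Zen 4 (1 FMA + 1 FADD)": flops_per_cycle(1, 1),
    "Xeon Platinum (2 FMA)": flops_per_cycle(2, 0),
}

for name, flops in configs.items():
    print(f"{name}: {flops} fp32 FLOPs/cycle")
```

    So for FMA-plus-FADD mixes, Zen 4's 48 FLOPs/cycle sits between the typical Intel client figure and the dual-FMA Xeons.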

    There are also various other improvements described at:



    The most expensive models of the upcoming Sapphire Rapids CPUs will again have double FMA throughput per clock cycle, and maybe Intel will give up on market segmentation and no longer disable the second FMA unit on the cheaper CPUs.

    Even if that happens, Zen 4 will continue to have a better AVX-512 implementation than most of the already existing Intel CPUs.




    • #52
      Originally posted by piotrj3 View Post

      So now you are comparing one company's fake TDP claims (already proven fake by reviewers) with Intel's TDP claims, which are probably also fake.
      References, please?

      Meanwhile, here is another study, by people who have actually had both chips in hand rather than some random in a forum who can't read and makes up fantasy stories...


      Last edited by Slartifartblast; 29 September 2022, 05:02 AM.



      • #53
        Originally posted by tunnelblick View Post
        I hope they offer better power consumption than AMD.
        I hope it comes with a free pony. Why are you hoping for things we know it won't have?



        • #54
          Originally posted by WannaBeOCer View Post
          What nonsense are you making up? The A770 is faster than an RTX 3060, which is faster than an RX Vega 64. It also has dedicated hardware for ray tracing and only uses 180 W, while an RX Vega 64 uses around 290 W.
          Raja Koduri, the founder of RDNA, and his great team made Arc. Which is why AMD sent him a card when he left.
          https://www.pcgamer.com/amd-reunites...graphics-card/
          Unlike AMD's consumer GPUs, it has tensor accelerators, and with 16 GB of VRAM it's going to be a fantastic consumer GPU for an introduction to deep learning using their oneAPI.
          Based on 95,879 user benchmarks for the AMD RX Vega-64 and the Intel Arc A770, we rank them both on effective speed and value for money against the best 714 GPUs.


          It looks like, if you do not use ray tracing, the Vega 64 beats the A770...

          And right now Intel's compute stack is not ready, while ROCm/HIP works for the Vega 64...

          "only uses 180 W while a RX Vega 64 uses around 290 W"

          Right, but there is also a price difference... and if you do not use ray tracing, the A770 looks like a bad choice.

          Maybe if you need the 16 GB of VRAM for compute, then you might get a good deal.



          • #55
            Originally posted by scottishduck View Post
            7000 series seems like a poor showing by AMD.
            You're joking, right? Their "poor showing" has them beating the Alder Lake i9-12900K by 22.9% (geomean):



            And that's while lowering launch prices vs. the previous generation and maintaining the same average power consumption vs. the 5950X! Intel will not be able to say the same!
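            For what it's worth, a geometric mean of per-benchmark speedup ratios (the kind of figure quoted above) is computed like this; the ratios below are made up for illustration, not the actual benchmark results:

```python
import math

# Hypothetical per-benchmark speedup ratios (7950X vs. 12900K); these
# numbers are illustrative only, not the real review data.
ratios = [1.31, 1.18, 1.05, 1.42, 1.22]

# Geometric mean = exp(mean(log(r))): the right average for ratios,
# since it treats "2x faster" and "2x slower" symmetrically.
geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"geomean speedup: {geomean:.3f}")
```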



            • #56
              Originally posted by coder View Post
              You're joking, right? Their "poor showing" has them beating Alder Lake i9-12900K by 22.9% (geomean):



              And that's while lowering launch prices vs. the previous generation and maintaining the same average power consumption vs. the 5950X! Intel will not be able to say the same!
              The power efficiency claims are nonsense. Look at the reviews. The chips are also designed to intentionally hit a constant 95C under load. It’s a ridiculous design decision by AMD.



              • #57
                Originally posted by scottishduck View Post

                The power efficiency claims are nonsense. Look at the reviews. The chips are also designed to intentionally hit a constant 95C under load. It’s a ridiculous design decision by AMD.
                der8auer already has a direct-die and delidding kit in the works for everyone who wants to shave 20 degrees off that 95 °C mark and doesn't want to limit the TDP. He also argues that AMD could have made a different trade-off between the height of the heatspreader and cooler compatibility.



                • #58
                  Originally posted by atomsymbol

                  Just a note: it isn't true that "no one batted an eye". I clicked the DISLIKE button on two videos published by the same YouTube channel you are referring to [https://www.youtube.com/watch?v=s04TOQkzv3c and https://www.youtube.com/watch?v=LJeEd7_Cv90], because those videos compare power efficiency across CPUs in a misleading way. But clicking the DISLIKE buttons is all I can do from my position, and I don't intend to post comments on those YouTube videos.
                  Fanboy feelings being hurt doesn't make something misleading. They have an open and well-documented testing methodology.



                  • #59
                    Originally posted by piotrj3 View Post
                    The issue is that Intel was going for high frequencies, while AMD in the past went for a multi-chip design (easier to manufacture), allowing AMD to simply offer more cores for the same fab/engineering price. So AMD could clock two chiplets at lower frequencies and, thanks to the extra cores, contest Intel on multicore performance.
                    This was true until Zen 3. Once Zen 3 happened, Intel actually had to raise clock speeds & power consumption of its 14 nm CPUs even to compete in single-threaded performance!

                    That held until Alder Lake, which enabled Intel to comfortably regain the single-threaded lead, although they seemed reluctant to take their foot off the gas (i.e. clock speeds).

                    Originally posted by piotrj3 View Post
                    Now AMD, with a technically superior node, clocked the chips high, making the 7950X do significantly less work per watt than the 5950X (which is on an inferior node).
                    Leaving aside the issue of the E-cores, let's stay focused on generational power-efficiency improvements. AMD delivered this:





                    So, their fundamental efficiency indeed improved. This will be virtually impossible for Intel to do in Raptor Lake, because they have the same microarchitecture being made on virtually the same process node. So, fundamental efficiency will not drastically change.

                    We can also see that AMD traded some of those efficiency gains for better performance, by increasing clock speeds. Intel will do the same. However, by not starting from a lower base like AMD, Intel's single-threaded efficiency can pretty much only get worse, in Gen 13. If they kept the same clocks as Gen 12, then we could see some small improvement, but they've already said they won't.
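                    The trade-off can be seen in a toy dynamic-power model (a sketch with hypothetical clock/voltage numbers, not measured Zen 4 or Raptor Lake data): dynamic power grows roughly with V² · f, and voltage must rise to reach higher clocks, so perf/W falls as frequency climbs.

```python
# Toy model: dynamic power ~ C * V^2 * f, performance ~ f.
# The clock and voltage figures below are hypothetical, chosen only to
# illustrate why chasing clocks costs efficiency.

def perf_per_watt(freq_ghz, volts, cap=1.0):
    power = cap * volts**2 * freq_ghz  # simplified dynamic-power model
    perf = freq_ghz                    # assume perf scales with clock
    return perf / power                # reduces to 1 / (cap * V^2)

efficient = perf_per_watt(freq_ghz=4.5, volts=1.10)
boosted = perf_per_watt(freq_ghz=5.5, volts=1.35)

print(f"perf/W at 4.5 GHz: {efficient:.3f}")
print(f"perf/W at 5.5 GHz: {boosted:.3f}")  # lower, despite higher perf
```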

                    Originally posted by piotrj3 View Post
                    Intel meanwhile goes exactly opposite way - doubles E cores to improve efficiency.
                    The main place where Raptor Lake can plausibly lower power consumption is in workloads with about 24 threads, because half of those threads will now move to the additional E-cores instead of over-taxing the 8 P-cores. In all-core workloads, the throughput added by the 8 extra E-cores should actually enable better perf/W than Alder Lake. The pity is that power consumption in such workloads is so very high, due to the aggressive clocking.

                    However, it's incorrect to say that Raptor Lake is chiefly about improving power-efficiency. If that were true, they wouldn't be increasing clock speeds, as well. What Intel is doing with Raptor Lake is to look for performance gains anywhere they can find them. Faster clock speeds, bigger L2 cache, faster DDR5, and more E-cores. It's all really about performance.

                    Originally posted by piotrj3 View Post
                    Also keep in mind that AMD only held the power-efficiency crown in cases where the multi-chip architecture could be used. The 12600K was more power-efficient than the 5800X; only when two AMD chiplets were involved, as in the 5900X/5950X, did the efficiency crown go to AMD.
                    That's not really true. AMD's APUs were much more power-efficient. The 5800X was an outlier, in terms of power-efficiency for the 5000-series.

                    If their next-gen APUs remain monolithic, then I think it'll be a similar story. However, the penalty of Ryzen 7000's MCM architecture should be lower, now that the I/O die is 6 nm (in the 5000 series it was either 14 nm or 12 nm).



                    • #60
                      Originally posted by tunnelblick View Post
                      The first benchmarks were from the lower tier of the cards IIRC?
                      Right, but we already know the specs of the mid & upper GPUs, so it's a simple exercise in extrapolation. These things rarely scale at/above linear, so linear extrapolation is basically a best-case scenario.
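                      A linear extrapolation of that sort looks like this (the unit counts and score below are placeholders, not real Arc SKU specs), and the result should be read as an upper bound:

```python
# Best-case linear extrapolation from a small GPU's measured score to a
# bigger SKU, scaling by execution-unit count (hypothetical numbers).

def extrapolate_score(score_small, units_small, units_big):
    """Scale linearly by unit count; real GPUs rarely scale this well,
    so treat the result as a best-case estimate."""
    return score_small * units_big / units_small

upper_bound = extrapolate_score(score_small=100.0, units_small=8, units_big=32)
print(f"best-case score for the bigger SKU: {upper_bound:.0f}")
```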

                      Originally posted by tunnelblick View Post
                      Intel said in a Digital Foundry video they are in for the long run and they know they still have a lot of work to do, especially when it comes to the driver side of things.

                      In general we should at least be happy that there's another one now in the GPU market that can drive some competition.
                      My comment was narrowly-targeted at the claim of the A770 launch being "much more exciting" than Raptor Lake. Personally, I think the Raptor Lake vs. Zen 4 race is a lot more exciting.

                      As far as Intel staying in the GPU race, I agree. Their drivers can pretty much only get better from here, and we're all beneficiaries of them staying in the game. I've used their iGPUs in compute workloads and plan to kick the tires of their dGPUs.
