
Intel Announces 13th Gen "Raptor Lake" - Linux Benchmarks To Come


  • coder
    replied
    Originally posted by AdrianBc View Post
    for such tasks the Intel efficient cores have a performance similar to that of a thread of a big Intel or AMD core.
    LOL, wut?

    No, they have only about 60% of the integer performance of a P-core running 1 thread. Where the E-cores come out ahead is when you load them instead of putting a second thread on a P-core.

    Originally posted by AdrianBc View Post
    On the other hand, for scientific computing or other floating-point applications or any other applications that can benefit from using AVX or AVX-512, Zen 4 will easily beat Raptor Lake, because the Intel efficient cores are weak at AVX.
    Don't underestimate them. They're about 54% as fast in FP workloads as a single-threaded P-core. So, the amount of throughput they add is significant, if not huge.
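
    As a rough back-of-envelope with the integer figure above, on an 8P+8E part and assuming an SMT sibling adds about 25% to a P-core (that SMT number is my own ballpark and very workload-dependent):

    $$\underbrace{8 \times 1.0}_{\text{P-cores, 1T}} + \underbrace{8 \times 0.25}_{\text{SMT siblings}} \approx 10 \qquad \text{vs.} \qquad \underbrace{8 \times 1.0}_{\text{P-cores, 1T}} + \underbrace{8 \times 0.6}_{\text{E-cores}} \approx 12.8$$

    So loading the E-cores buys roughly twice the extra throughput that lighting up the SMT siblings does, which is the point.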

    Originally posted by AdrianBc View Post
    which have prevented Intel from having the best support for an instruction set that they developed more than 14 years ago (the first public description of the first variant of AVX-512 was in 2008, 3 years before Sandy Bridge, which used the inferior AVX instruction set).
    Sandy Bridge was a 32 nm CPU and it didn't even implement AVX at full 256-bit width. I think they didn't do that until Haswell, which used 22 nm. And Haswell had an infamous clock-throttling issue with AVX2-heavy workloads, although it pales in comparison to the AVX-512 clock throttling problems Intel had on the 14 nm CPUs where they introduced it.

    My point is that what you're talking about is a low-clocked, in-order Larrabee core. You cannot compare that to a high-clocked out-of-order, general-purpose CPU core. Even 2016 was too soon for Intel to deploy AVX-512 on general-purpose cores @ full width. It was a big mistake, due to all of the clock-throttling problems it caused. Possibly 10 nm ESF (AKA "Intel 7") is the first time it really makes sense.



  • WannaBeOCer
    replied
    Originally posted by qarium View Post

    https://gpu.userbenchmark.com/Compar...3933vsm1850973

    it looks like, if you do not use raytracing, the Vega 64 beats the A770...

    and right now Intel's compute stack is not ready, while ROCm/HIP works for the Vega 64...

    "only uses 180w while a RX Vega 64 uses around ~290w"

    right, but there is also a price difference... and if you do not use raytracing, the A770 looks like a bad choice.

    maybe if you need the 16 GB of VRAM for compute, then you might get a good deal.
    Userbenchmark is a joke, and if you notice, many of the tests are blank for the A770 because no one who isn't under embargo has one yet. Regarding regular rasterization, they already announced that in DX12/Vulkan their A750 trades blows with the RTX 3060, while DX11 will be slower. They're going to be in a similar situation to AMD's GCN, where their DX11 driver was ass and then fanboys started calling it fine wine. There's no point in buying a Vega/RDNA1 GPU now, since neither of them supports DX12 Ultimate, which means no mesh shader support.

    Originally posted by atomsymbol

    It is misleading because if I bought a Ryzen 7000 CPU then I would not be experiencing such power-consumption or power-efficiency numbers when using my machine, even if I decided to run the same apps as the apps that were tested in those Gamers Nexus YouTube reviews.

    Secondly, I hope you do understand that the purpose of their testing methodology is to reduce the statistical variance of their measurements - their testing methodology is not there to reproduce mine or your home/office conditions. If you believe that their testing environment is built to reflect actual home and office environments/conditions then you are mistaken ---- unless you happen to have a stable supply of liquid nitrogen to your home/office.

    The power-efficiency numbers published on AMD's (or Intel's) slides are actually much more relevant for home/office use cases than what GamersNexus is reporting, if those slides contain power-efficiency curves comparing current-generation CPUs to previous-generation CPUs.

    I am sure that the NASA Jet Propulsion Laboratory has very advanced testing methodologies as well ---- but the applicability of those methodologies to home and office environments is questionable.

    You're trolling, right? Every review I've seen that uses all the cores shows a 7950X using 240 W+ in real-world workloads. If you're only using your PC for gaming/office tasks, then you're buying the wrong chip. What did you think was going to happen when you crank up the frequency?

    https://www.techpowerup.com/review/a...-7950x/24.html
    Last edited by WannaBeOCer; 29 September 2022, 02:32 PM.



  • coder
    replied
    Originally posted by AdrianBc View Post
    AVX-512 implementation than in the majority of the Intel CPUs (with the exception of Xeon Platinum and similar overpriced Intel SKUs).
    Even Platinum Xeon SP CPUs have serious problems with AVX-512 and clock-throttling. That's addressed somewhat in the Ice Lake generation, though I can't say how much. I expect Sapphire Rapids to be even better, but we'll see.

    Throughput-wise, the dual-FMA Xeon models probably do outperform Zen4 in single-thread AVX-512 performance, on AVX-512 heavy workloads. In mixed and highly-threaded workloads, clock-throttling would probably impede the Xeons too much. Perhaps Sapphire Rapids will retake the lead here, as well.
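
    For a rough sense of scale, the peak FMA math per core per cycle works out like this, assuming the usual configurations (two 512-bit FMA pipes on the dual-FMA Xeons; 512-bit ops executed over 256-bit datapaths on Zen 4):

    $$\text{dual-FMA Xeon: } 2 \times 8 \times 2 = 32 \text{ FP64 FLOP/cycle} \qquad \text{Zen 4: } 2 \times 4 \times 2 = 16 \text{ FP64 FLOP/cycle}$$

    So per clock the dual-FMA parts have roughly twice the peak, and the real question is which one sustains the higher clock under that kind of load.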

    Originally posted by AdrianBc View Post
    Rewriting or recompiling programs to use AVX-512 can give a very nice boost to many applications, and it is a much more pleasant instruction set to program in than the previous crippled instruction sets implemented by Intel, i.e. MMX, SSE and AVX.
    Pleasant? I didn't find anything unpleasant about the SSE code I've written, but if you need to do things like scatter/gather, then it's indeed painful to do with them. At least lane-swizzling got a lot better than the old days of MMX.
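
    For anyone who hasn't run into it, here's a minimal sketch of the difference (my own illustrative code, not taken from anything in particular): SSE makes you assemble gathered lanes by hand, while AVX-512F has a single gather instruction.

    ```c
    // Build with something like: gcc -O2 -mavx512f gather_demo.c
    #include <immintrin.h>
    #include <stdio.h>

    // SSE has no gather, so you pull each lane in manually.
    static __m128 gather4_sse(const float *base, const int idx[4])
    {
        return _mm_set_ps(base[idx[3]], base[idx[2]], base[idx[1]], base[idx[0]]);
    }

    // AVX-512F: one instruction fetches 16 lanes from 16 indices.
    static __m512 gather16_avx512(const float *base, const int idx[16])
    {
        __m512i vindex = _mm512_loadu_si512(idx);
        return _mm512_i32gather_ps(vindex, base, 4 /* scale = sizeof(float) */);
    }

    int main(void)
    {
        float table[64];
        for (int i = 0; i < 64; i++) table[i] = (float)i;

        int idx4[4]   = {3, 1, 4, 1};
        int idx16[16] = {5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4};

        float out4[4], out16[16];
        _mm_storeu_ps(out4, gather4_sse(table, idx4));
        _mm512_storeu_ps(out16, gather16_avx512(table, idx16));

        printf("%g %g %g %g\n", out4[0], out4[1], out4[2], out4[3]);
        printf("%g ... %g\n", out16[0], out16[15]);
        return 0;
    }
    ```

    Scatter is the same story in reverse (_mm512_i32scatter_ps), which simply has no SSE/AVX2 equivalent.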



  • coder
    replied
    Originally posted by tunnelblick View Post
    The first benchmarks were from the lower tier of the cards IIRC?
    Right, but we already know the specs of the mid & upper GPUs, so it's a simple exercise in extrapolation. These things rarely scale at/above linear, so linear extrapolation is basically a best-case scenario.
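
    i.e., something like this (the Xe-core counts are the published specs; the A380 score S is just a placeholder):

    $$\text{A770 best case} \approx S_{\text{A380}} \times \frac{32 \text{ Xe-cores}}{8 \text{ Xe-cores}} \approx 4 \times S_{\text{A380}}$$

    and in practice memory bandwidth, clocks, and driver overhead all pull it below that line.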

    Originally posted by tunnelblick View Post
    Intel said in a Digital Foundry video they are in for the long run and they know they still have a lot of work to do, especially when it comes to the driver side of things.

    In general we should at least be happy that there's another one now in the GPU market that can drive some competition.
    My comment was narrowly-targeted at the claim of the A770 launch being "much more exciting" than Raptor Lake. Personally, I think the Raptor Lake vs. Zen 4 race is a lot more exciting.

    As far as Intel staying in the GPU race, I agree. Their drivers can pretty much only get better from here, and we're all beneficiaries of them staying in the game. I've used their iGPUs in compute workloads and plan to kick the tires of their dGPUs.



  • coder
    replied
    Originally posted by piotrj3 View Post
    Issue is Intel was going for high frequencies while AMD in the past was going for multichip design (easier to manufacture) allowing AMD to simply offer more cores for same fab/engineering price. So AMD could clock 2 chips at lower frequencies and due to more cores contest with Intel on multicore performance territory.
    This was true until Zen 3. Once Zen 3 happened, Intel actually had to raise clock speeds & power consumption of its 14 nm CPUs even to compete in single-threaded performance!

    That held until Alder Lake, which enabled Intel to comfortably regain the single-threaded lead, although they seemed reluctant to take their foot off the gas (i.e. clock speeds).

    Originally posted by piotrj3 View Post
    Now AMD, with a technically superior node, clocked chips high, making the 7950X do significantly less work per watt than the 5950X (which is on an inferior node).
    Leaving aside the issue of the E-cores, let's stay focused on generational power-efficiency improvements. AMD delivered this:





    So, their fundamental efficiency indeed improved. This will be virtually impossible for Intel to do in Raptor Lake, because they have the same microarchitecture being made on virtually the same process node. So, fundamental efficiency will not drastically change.

    We can also see that AMD traded some of those efficiency gains for better performance, by increasing clock speeds. Intel will do the same. However, by not starting from a lower base like AMD, Intel's single-threaded efficiency can pretty much only get worse in Gen 13. If they kept the same clocks as Gen 12, then we could see some small improvement, but they've already said they won't.
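
    For anyone who wants the underlying formula, dynamic CMOS power scales roughly as

    $$P_{\text{dyn}} \approx C \cdot V^2 \cdot f$$

    and since hitting a higher f generally needs a higher V, power rises much faster than performance once you're past the efficient part of the curve. That's the trade both vendors are making.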

    Originally posted by piotrj3 View Post
    Intel meanwhile goes exactly opposite way - doubles E cores to improve efficiency.
    The main place where Raptor Lake can possibly lower power consumption is in workloads with about 24 threads, because half of those threads will now move to the additional E-cores instead of over-taxing the 8 P-cores. In all-core workloads, the throughput added via 8 additional E-cores should actually enable better perf/W than Alder Lake. The pity is that power consumption of such workloads is so very high, due to their aggressive clocking.

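    To put numbers on that, using the known configurations (12900K = 8P + 8E = 24 hardware threads; 13900K = 8P + 16E = 32 hardware threads), a 24-thread job lands like this:

    $$\text{Alder Lake: } 24 = \underbrace{8}_{\text{P, 1st thread}} + \underbrace{8}_{\text{P, SMT sibling}} + \underbrace{8}_{\text{E}} \qquad \text{Raptor Lake: } 24 = \underbrace{8}_{\text{P, 1st thread}} + \underbrace{16}_{\text{E}}$$

    The SMT siblings on the P-cores are no longer needed, which is where the potential power saving comes from.
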
    However, it's incorrect to say that Raptor Lake is chiefly about improving power-efficiency. If that were true, they wouldn't be increasing clock speeds, as well. What Intel is doing with Raptor Lake is to look for performance gains anywhere they can find them. Faster clock speeds, bigger L2 cache, faster DDR5, and more E-cores. It's all really about performance.

    Originally posted by piotrj3 View Post
    Also keep in mind AMD only had the power-efficiency crown in cases where the multi-chip architecture could be used. The 12600K was more power efficient than the 5800X; only when two AMD chiplets were involved, as in the 5900X/5950X, did the efficiency crown go to AMD.
    That's not really true. AMD's APUs were much more power-efficient. The 5800X was an outlier, in terms of power-efficiency for the 5000-series.

    If their next-gen APUs remain monolithic, then I think it'll be a similar story. However, the penalty of Ryzen 7000's MCM architecture should be lower, now that the I/O Die is 6 nm (in the 5000 series it was either 14 nm or 12 nm).



  • scottishduck
    replied
    Originally posted by atomsymbol

    Just a note: It isn't true that "no one batted an eye". I clicked the DISLIKE button on two videos published by the same Youtube channel you are referring to, [https://www.youtube.com/watch?v=s04TOQkzv3c and https://www.youtube.com/watch?v=LJeEd7_Cv90] because those videos are a misleading way of comparing power-efficiency across CPUs. But clicking the DISLIKE buttons is all I can do from my position, and I don't intend to post comments to those Youtube videos.
    Fanboy feelings being hurt doesn’t make something misleading. They have an open and well documented testing methodology.



  • ms178
    replied
    Originally posted by scottishduck View Post

    The power efficiency claims are nonsense. Look at the reviews. The chips are also designed to intentionally hit a constant 95C under load. It’s a ridiculous design decision by AMD.
    der8auer already has a direct-die and delidding kit in the works for everyone who wants to shave 20 degrees off of that 95 C mark and doesn't want to limit the TDP. He also argues that AMD could have made a different trade-off when it comes to the height of the heatspreader vs. cooler compatibility.



  • scottishduck
    replied
    Originally posted by coder View Post
    You're joking, right? Their "poor showing" has them beating Alder Lake i9-12900K by 22.9% (geomean):



    And while lowering launch prices vs. the previous generation & maintaining the same average power consumption vs. the 5950X! Intel will not be able to say the same!
    The power efficiency claims are nonsense. Look at the reviews. The chips are also designed to intentionally hit a constant 95C under load. It’s a ridiculous design decision by AMD.



  • coder
    replied
    Originally posted by scottishduck View Post
    7000 series seems like a poor showing by AMD.
    You're joking, right? Their "poor showing" has them beating Alder Lake i9-12900K by 22.9% (geomean):



    And while lowering launch prices vs. the previous generation & maintaining the same average power consumption vs. the 5950X! Intel will not be able to say the same!



  • qarium
    replied
    Originally posted by WannaBeOCer View Post
    What nonsense are you making up? The A770 is faster than a RTX 3060 which is faster than a RX Vega 64. It also has dedicated hardware for ray tracing and only uses 180w while a RX Vega 64 uses around ~290w
    Raja Koduri, who is the founder of RDNA, and his great team made Arc. Which is why AMD sent him a card when he left.
    https://www.pcgamer.com/amd-reunites...graphics-card/
    Unlike AMD's consumer GPUs it has tensor accelerators, and with 16GB of VRAM it's going to be a fantastic consumer GPU for getting introduced to deep learning using their oneAPI.
    Based on 95,879 user benchmarks for the AMD RX Vega-64 and the Intel Arc A770, we rank them both on effective speed and value for money against the best 714 GPUs.


    it looks like, if you do not use raytracing, the Vega 64 beats the A770...

    and right now Intel's compute stack is not ready, while ROCm/HIP works for the Vega 64...

    "only uses 180w while a RX Vega 64 uses around ~290w"

    right, but there is also a price difference... and if you do not use raytracing, the A770 looks like a bad choice.

    maybe if you need the 16 GB of VRAM for compute, then you might get a good deal.

