Intel Announces 13th Gen "Raptor Lake" - Linux Benchmarks To Come


  • drakonas777
    replied
    Originally posted by coder View Post
    That used to be the case, but if PPT is 1.35x of TDP and unlimited in time, then it renders TDP a complete fiction for any purpose.

    This situation has gotten so ridiculous that we really need governments to start dictating how these metrics should be quantified, like they do the fuel-efficiency metrics for automobiles. That's the only way to stop this madness.
I'm not sure I understand your point. At least in theory it's possible for TDP to be lower than PPT, given that only a fraction of PPT is required to sustain the base frequency. For example: TDP is 125W, PPT is 170W. You fit a 125W cooling solution; the CPU starts drawing more than 125W until it reaches the thermal limit, at which point the boost algorithm drops frequency and voltage until power draw falls to, say, ~130W, which would be sufficient for the base frequency. I'm not saying this is the case for ZEN4; I'm saying it's possible in theory.

    That said, it feels to me that 105W for the 7700X and 170W for the 7900X/7950X really should be enough to sustain base clocks. Power scaling shows that 170W gives around 80-85% performance for the 7950X (if I recall the Buildzoid video correctly), so it seems you can actually even get some boost there. The price, of course, is that the CPU is going to run very hot.
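    To make that concrete, here's a toy steady-state model of the mechanism I'm describing (every constant is an illustrative assumption, not AMD's actual boost algorithm):

```python
# Toy steady-state model of the TDP-vs-PPT interplay described above.
# Every constant is an illustrative assumption, not AMD's real algorithm.
TDP_W, PPT_W = 125.0, 170.0   # advertised TDP vs. package power limit
BASE_GHZ, MAX_GHZ = 4.5, 5.4  # base frequency the TDP must guarantee
COOLER_W = 130.0              # what a "125 W"-class cooler really sustains

def power_draw(freq_ghz):
    # Crude model: voltage tracks frequency, so power grows roughly ~f^3.
    return PPT_W * (freq_ghz / MAX_GHZ) ** 3

# Conceptually what the boost algorithm converges to: start at the
# PPT-limited clock, then shed frequency/voltage until the cooler copes.
freq = MAX_GHZ
while power_draw(freq) > min(PPT_W, COOLER_W) and freq > BASE_GHZ:
    freq -= 0.01

print(f"settles at {freq:.2f} GHz, ~{power_draw(freq):.0f} W")
# -> ~4.9 GHz at ~130 W: still above base clock, so the 125 W TDP
#    "promise" holds even though PPT is 170 W. Base clock itself only
#    needs ~98 W in this model -- a fraction of PPT, as argued above.
```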
    Last edited by drakonas777; 03 October 2022, 08:12 AM.



  • arQon
    replied
    Originally posted by piotrj3 View Post
Anyway, my issue is that AMD changed the definition of their TDP (which the video specifically mentions). Before, TDP was actually the power your CPU drew from the EPS rail.
    ... What?! No it wasn't, and it was never that even back in the days when Intel - which is, incidentally, not the same company as AMD - still even pretended it was.

> Intel meanwhile implies 2 things: one is base power draw and the other is boost power draw, and with the exception of some AVX512 workloads you will not break that boost power draw.

    Again: ... What?! Intel has been lying about TDP since before you were born.

    coder

    re TDP in general: it's a fiction. It's always been a fiction, but now it's just absurdly so (in much the same way that lies in, say, politics grow over time, like boiling a frog).

    That's it. That's literally all there is to it.

    The "rough" TDP is an imaginary thermal output under an imaginary load with an imaginary cooler in an imaginary environment, which once upon a time was based on vaguely-realistic numbers for at most one of those terms.
    The "official" TDP is whatever nice round number is vaguely within cannon range of the original fictitious number that Marketing likes, and is slightly lower per perf unit than whatever the competition's is.

    That's *before* the creation of boost clocks, let alone the infinite boost clocks that have been around since ?Sandy? ?Ivy?.

    AMD's 90W sustained in 65W mode is, I suspect, a bug rather than willfully deceptive **, but Intel has been "off" by staggering amounts at times for years now on multiple chips, so it's possible AMD is just following suit because it has to. Obviously, if one company is misrepresenting its CPUs by well over 50W, and one is "only" doing so by 10W, the latter is going to get creamed on perf/W judgements unless they have a design that's 20%+ more performant at a given power draw. We've seen one of those in the past decade, thanks to 14++++ vs TSMC7 Zen, so it can happen, but it's pretty rare and not something you can bet the company on year after year for a decade or more.

    ** Simply because at the other power draw targets it's pretty close, but the 65W behavior is an outlier so I'm giving them the benefit of the doubt. I won't be shocked if no AGESA/microcode update fixes it for several months though, or indeed, ever.
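    To spell out the perf/W point above with purely hypothetical numbers (nothing below is a real measurement):

```python
# Hypothetical numbers, just to show how label-vs-reality skews perf/W.
chips = {
    # name: (perf score, labelled TDP in W, actual sustained draw in W)
    "vendor A": (1000, 125, 180),   # off by 55 W
    "vendor B": (1000, 125, 135),   # off by 10 W
}
for name, (perf, label, actual) in chips.items():
    print(f"{name}: {perf/label:.1f} perf/W on paper, "
          f"{perf/actual:.1f} perf/W in reality")
# Equal on paper; in reality B is ~33% more efficient -- but any review
# that trusts the label scores them identically.
```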

    As far as "then it renders TDP a complete fiction for any purpose" goes? Yeah, pretty much. I'm not sure exactly when it went from "kinda sorta at least *vaguely* representative of reality" to "not even remotely so" - like I say, it was boiling a frog - but it's been there for a very long time now, and I don't see that ever improving. (Unless forced to by some EU regulation or something, but they've got bigger fish to fry right now, and Intel has deep pockets).

    edit> I'm buried in post-vacation backlog right now, so I'm trying to minimize posting, but if you need a couple of examples to understand how the process is perverted let me know.
    Last edited by arQon; 03 October 2022, 02:46 AM.



  • coder
    replied
    Originally posted by drakonas777 View Post
I'd say TDP is a rough estimate for the thermal solution, which guarantees that the CPU is going to work at least at base frequency.
    That used to be the case, but if PPT is 1.35x of TDP and unlimited in time, then it renders TDP a complete fiction for any purpose.

    This situation has gotten so ridiculous that we really need governments to start dictating how these metrics should be quantified, like they do the fuel-efficiency metrics for automobiles. That's the only way to stop this madness.
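    For reference, that 1.35x ratio against AMD's published TDP/PPT pairs (these are the widely-reported AM4/AM5 figures; treat them as illustrative):

```python
# The 1.35x relationship, using AMD's published TDP/PPT pairs
# (values widely reported for AM4/AM5; treat as illustrative).
for tdp in (65, 105, 125, 170):
    print(f"TDP {tdp:>3} W -> PPT ~{tdp * 1.35:.0f} W")
# -> 65 -> 88, 105 -> 142, 125 -> 169, 170 -> 230
```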



  • piotrj3
    replied
    Originally posted by coder View Post
    Try 13%.



    I don't really get what you're complaining about. Isn't it the dream of overclockers to have a CPU that's only thermally-limited? If you want to impose lower power limits, you can do it in BIOS.


    That's not at all atypical, when you do extreme overclocking, which is essentially what he did.


    I'll grant you this one point: that TDP is misleading when the actual PPT is 1.35x that much. Based on my simplistic understanding, I don't know why they're not equal. If someone can point me at a compelling rationale, I'd appreciate it.


    power consumption != power efficiency. Also, power efficiency changes, depending on the SKU and TDP configuration. It's not a single number that characterizes all models in all configurations.

    Because of that, it really matters why you're looking at it. If you just want to compare the microarchitecture and manufacturing process, then you will want to compare comparable models running in a similar power envelope (and not a similarly-named power envelope, but as close as you can get to one that's actually equivalent).

    If you want to compare the typical end user power efficiency, then compare comparable models at stock settings, with a normal case & cooler, running on a defined workload.

People tend to take the highest number from the most extreme part, in the highest-power configuration, and use that to characterize the entire product line. However, that's only applicable to those intending to run that part in that configuration.
GN got 251W at stock, not by extreme overclocking. The only thing you need to get 250W+ is a very good cooler.
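    As an aside, the "actually-equivalent envelope" comparison described in the quote above would look something like this in practice (all curve data is invented purely for illustration):

```python
# Interpolate each part's measured perf-vs-power curve at a common wattage,
# instead of comparing at similarly-*named* power envelopes.
def interp(x, xs, ys):
    """Linear interpolation of a perf-vs-power curve at wattage x."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside measured range")

watts  = [65, 95, 125, 170, 230]        # measured package power points
perf_a = [700, 850, 940, 1000, 1030]    # hypothetical part A
perf_b = [760, 880, 950,  990, 1005]    # hypothetical part B

for target in (95, 125):   # compare both parts at the *same* wattage
    a, b = interp(target, watts, perf_a), interp(target, watts, perf_b)
    print(f"@{target} W: A={a:.0f}, B={b:.0f}, B is {b/a - 1:+.1%}")
```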



  • drakonas777
    replied
I'd say TDP is a rough estimate for the thermal solution, which guarantees that the CPU is going to work at least at base frequency. This is mostly for shit tier PC builders, who can put in a minimal, cheap cooler and more or less be sure the CPU won't throttle. Actual CPU power consumption is higher, so if you want to sustain high boost clocks you need a better thermal solution. That's my interpretation of why the TDP parameter exists and why it's lower than actual consumption.

As for ZEN4, it's obvious that AMD made the stock power parameters stupid to compete with Intel's stupid PL2. So in reality we should evaluate CPU performance in some sane power range, say 65-150W. I don't see the point of performance graphs where CPUs draw 240+ W on a mainstream desktop. It's insane. AFAIK the performance gains from running ZEN4 beyond 150W are basically negligible, so an extra 100W for <~15% is just irrational. It should not be the default.
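    Back-of-envelope, using this post's own rough figures rather than any measurement:

```python
# Back-of-envelope version of the "+100 W for <15%" point. The inputs are
# this post's rough figures, not measurements.
perf_150, perf_250 = 100.0, 114.0      # normalized performance scores
watts_150, watts_250 = 150.0, 250.0    # package power at each setting
gain = perf_250 / perf_150 - 1
eff_ratio = (perf_250 / watts_250) / (perf_150 / watts_150)
print(f"+{watts_250 - watts_150:.0f} W buys {gain:.0%} more performance "
      f"and costs {1 - eff_ratio:.0%} of your perf/W")
# -> +100 W buys 14% more performance and costs 32% of your perf/W
```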
    Last edited by drakonas777; 02 October 2022, 09:07 AM.



  • smitty3268
    replied
    Originally posted by coder View Post
    I'll grant you this one point: that TDP is misleading when the actual PPT is 1.35x that much. Based on my simplistic understanding, I don't know why they're not equal. If someone can point me at a compelling rationale, I'd appreciate it.
TDP is a made-up marketing term that is meaningless beyond a basic correlation with power use. And it only correlates within the same CPU lineup, not across generations, as AMD and Intel fully reserve the right to change the variables in their made-up calculations at any point. An example factor that goes into it is "room temperature during testing". Which room temp did they use to get the numbers they're advertising? No idea; they won't tell you.

    They advertise a 125W TDP part because marketing thinks it sounds better than saying it's a 142W part. Simple as that.
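    For what it's worth, the definition AMD reportedly uses (per GamersNexus's TDP deep-dive; I'm going off that report, not a datasheet) makes those hidden knobs explicit:

```python
# AMD's TDP formula as reported in GamersNexus's deep-dive -- treat this as
# the reported definition, not an official datasheet quote:
#   TDP = (tCase_max - tAmbient) / theta_ca of the reference cooler
def amd_tdp(t_case_max_c, t_ambient_c, theta_ca_c_per_w):
    return (t_case_max_c - t_ambient_c) / theta_ca_c_per_w

# Plug-in values are the ones GN reported for a 105 W part. Note the knobs:
# assume a different ambient or a different reference cooler and the "same"
# silicon gets a different TDP -- which is exactly the point above.
print(f"{amd_tdp(61.8, 42.0, 0.189):.0f} W")   # -> 105
```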



  • coder
    replied
    Originally posted by AdrianBc View Post
    Moreover, the lower SMT gain for floating-point applications is also well known and expected, because the execution time of such programs is dominated by loops with perfect branch prediction and they include a large percentage of computational instructions that can be overlapped over loads, and most data is reused several times, so it is loaded from various cache memory levels, not from the main memory.

Because of this, most floating-point applications can achieve a very high percentage of use of the execution units, so there are few opportunities for executing the second thread of a core. For floating-point applications, it is not uncommon to achieve better performance by disabling SMT.
    Thanks for acknowledging that point. I hope you'll further acknowledge that it blows a hole in your theory that 1E = 1P2T / 2, or that such a thing was even a design requirement of Intel's. This is too simplistic.

    They created the Thread Director specifically to aid the OS in more effective thread scheduling, in an acknowledgement of the challenges it poses.



  • coder
    replied
    Originally posted by AdrianBc View Post
In the current manufacturing processes, the cost of chip revisions has become exceedingly large: millions of dollars for even the simplest changes.

    So the CPU designing companies do not make new revisions except for bugs so serious that they would expose the companies to legal liabilities, e.g. security bugs or data corruption bugs.
    Right. They wouldn't do a stepping just for this. However, I still wonder how many steppings they typically do over a product's lifetime. For instance, the B2 stepping of the 5800X looks to have some nice improvements that are quite plausibly just a collection of errata fixes.



    One thing I like about Raptor Lake is that because there are no major microarchitecture changes in it, I see it as basically just a patched and tuned version of Alder Lake. In other words, you could look at it as the CPU Alder Lake was meant to be.

    Traditionally, I've been a late-adopter, hoping to benefit from various fixes in later chip steppings, board revisions, and firmware fixes.

    Originally posted by AdrianBc View Post
    If you look at the errata lists for the Intel CPUs (euphemistically named "Specification Update") and for the AMD CPUs (euphemistically named "Revision Guide"), for each CPU model there may be up to one hundred bugs that have the resolution "Won't fix".
    That doesn't necessarily mean that some aren't opportunistically fixed in later steppings.



  • AdrianBc
    replied
    Originally posted by coder View Post
    According to https://www.anandtech.com/show/17047...d-complexity/9 8P2T is faster than 8P1T (both DDR5) by approximately:
    • 17.5% faster @ SPEC2017int
    • 2.1% faster @ SPEC2017fp

    So, it's a little worse than you say for int, and much worse for fp. FWIW, the numbers I quoted previously were based on the single-thread aggregate scores comparing 1P1T vs. 1E.

A gain from SMT of 20% to 25%, or sometimes even up to 30%, is typical when you run completely unrelated programs on the two threads of a core, because only then are there good odds that main-memory loads or branch mispredictions from one thread will coincide in time with instructions from the other thread that can be executed immediately.

    The most common application that gains a lot from SMT is compiling a large software project, where each thread compiles a different source file, and the threads not only have frequent stalls due to branch mispredictions and cache misses, but there are also frequent stalls while waiting for SSD or HDD operations.


    The SPEC benchmark is notorious for having a low gain from SMT, which is expected, because all threads run the same program.

    Moreover, the lower SMT gain for floating-point applications is also well known and expected, because the execution time of such programs is dominated by loops with perfect branch prediction and they include a large percentage of computational instructions that can be overlapped over loads, and most data is reused several times, so it is loaded from various cache memory levels, not from the main memory.

    Because of this, most floating-point applications can achieve a very high percentage of use of the execution units, so there are few opportunities for executing the second thread of a core. For floating-point applications, it is not uncommon to achieve better performance by disabling SMT.


So, the numbers you presented are indeed typical for the SPEC benchmark, and they may also be representative of certain multi-threaded programs that load all threads with similar computations, but they are not typical of the SMT gain when random programs are executed on multiple threads, where the gain can be much higher.


So 20% can be considered a median value for the SMT gain across different use cases.
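    A toy model of that mechanism (the half-recovery assumption and all numbers are invented, purely to show the ordering):

```python
# Toy illustration of the point above: SMT pays off in proportion to how
# often the first thread leaves execution slots idle. Everything here is
# invented for illustration, not measured.
def smt_gain(busy_fraction):
    """If thread 1 keeps the core busy this fraction of the time, assume
    thread 2 recovers half of the remaining idle issue slots."""
    return 0.5 * (1.0 - busy_fraction)

for workload, busy in [("int, branchy/cache-missy", 0.55),
                       ("unrelated mixed programs",  0.50),
                       ("fp, tight compute loops",   0.95)]:
    print(f"{workload}: ~{smt_gain(busy):.0%} SMT gain")
# -> ~22%, ~25%, ~2%: the same ordering as the SPEC int/fp numbers above.
```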



  • Anux
    replied
    Originally posted by piotrj3 View Post

The issue is it's subjective. Anyway, my issue is that AMD changed the definition of their TDP (which the video specifically mentions). Before, TDP was actually the power your CPU drew from the EPS rail.
Are there still people who don't know the meaning of TDP? It's been discussed everywhere, and it's on Wikipedia. Thermal Design Power, like the name suggests, is not electrical power consumption; it only correlates with it. If you raise the Tjunction temp, the TDP gets lower while power consumption might actually rise a little.
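    A quick sketch of why (values are illustrative; this is just the basic thermal-resistance relation, not any vendor's exact spec):

```python
# The name is the point: TDP constrains the *cooler*, not the wall socket.
# Required cooler capability (lower theta = beefier cooler) for a given
# heat load and junction-temperature budget. Values are illustrative.
def required_theta(t_junction_max_c, t_ambient_c, heat_w):
    """Degrees C per watt the heatsink must achieve."""
    return (t_junction_max_c - t_ambient_c) / heat_w

print(f"{required_theta(95, 35, 125):.2f} C/W")   # 0.48 C/W at Tj(max)=95
print(f"{required_theta(115, 35, 125):.2f} C/W")  # 0.64 C/W: raising
# Tj(max) tolerates a weaker cooler even though the watts are unchanged.
```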

