Intel Core i5 13600K + Core i9 13900K "Raptor Lake" Linux Preview


  • qarium
    replied
    Originally posted by atomsymbol
    The value of f(x)=x*x that runs for 0.9 seconds will very quickly exceed the value of f(x)=x that runs for 1.0 seconds.
To be honest, I lack the education to say anything about this, but thank you for the link; I will try to educate myself.

Isn't it illogical that in the theory you presented race-to-idle does not work, while CPUs like the "Tachyum Prodigy T16128-AIX, 5nm, 5.7 GHz, TDP 950 Watt" apparently do rely on exactly that race-to-idle effect?

Maybe it is about first-class choices and second-class choices: maybe the first-class choice is to use a better manufacturing node. That is not an option for Intel, which is why they take the second-best option and target race-to-idle.

Who knows; this CPU, the "Tachyum Prodigy T16128-AIX, 5nm, 5.7 GHz, TDP 950 Watt", looks like it does both: it uses the better 5nm node and exploits the race-to-idle effect with its 950 W TDP.



  • qarium
    replied
    Originally posted by atomsymbol
    You cannot be serious.
    I didn't write that "you said it" - I wrote that you assumed it (i.e: it was implied by the content of your posts).
    Do the math and you will see.
    Idle power (CPU usage = 0%) is a non-zero value. For example: 20 Watts for a desktop CPU; plus the power lost on the PSU (Power Supply Unit efficiency) when supplying those 20 Watts to the CPU.
    Such high-wattage x86-compatible & ARM-compatible CPUs are actually in the production pipeline right now, and they aren't AMD/ARM/Intel CPUs: Tachyum Prodigy T16128-AIX, 5nm, 5.7 GHz, TDP 950 Watt. Read https://www.tachyum.com/resources/wh...ural-overview/
    Note: The penalty of running x86 or ARM code on a Prodigy CPU is approximately 30-40% because it runs via QEMU. But, 5.7Ghz * 0.65 = 3.7GHz and this is a server/rack CPU (it isn't a desktop CPU; Prodigy aims to compete with EPYC and Xeon CPUs).
Well, from a logical point of view, your post contradicts itself.

In the first half of your post you take the standpoint that maxing out the TDP and then targeting race-to-idle is not an option.

And in the second half of your post you point out that, surprise surprise, the Tachyum Prodigy T16128-AIX (5nm, 5.7 GHz, TDP 950 Watt) does exactly this.

And this CPU even does both: it uses a high-end 5nm manufacturing node and it maxes out the TDP to exploit race-to-idle.

Can you explain to me why Intel should not develop a 1000 W TDP desktop CPU and then also target race-to-idle?



  • qarium
    replied
    Originally posted by atomsymbol
    In your recent posts, you are assuming that the delta of cpu-power-usage is a linear function of [the time saved by going to a higher frequency], but this isn't true. It is a non-linear function.
Dude, it does not matter at all whether this is a linear or a non-linear function.

Also, I did not say that it is a linear function.

The fact is: a CPU with terrible peak power numbers in benchmarks, like the 295-335 W of the Intel 12900K/13900K, can still save power for normal people outside of benchmarks, and the reason for this is the race-to-idle effect. Normal people do not run benchmarks all the time; they run the tasks of their everyday life, and in that power profile the idle power consumption is more important than the max TDP. Also, if a task is done faster, the system spends longer in idle mode; see the sketch below.

People claim Intel cannot improve their CPUs with their inferior 10nm node, but that is wrong: Intel could develop a 1000 W TDP CPU, clock it at 10 GHz, and promote "race to idle" real-life power profile results instead of stupid TDP-versus-benchmark numbers.
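
To make that arithmetic concrete, here is a minimal Python sketch of the race-to-idle energy model. Every number in it (300 W vs. 65 W package power, an 8x speed difference, 20 W idle draw) is invented for illustration; the only point is that once idle time dominates the window, the faster high-power chip can come out ahead.

# Energy over a fixed wall-clock window containing one task run:
# active power * active time + idle power * remaining time.
def window_energy_wh(active_w, idle_w, task_s, window_s=3600):
    joules = active_w * task_s + idle_w * (window_s - task_s)
    return joules / 3600.0  # convert J to Wh

# Hypothetical chips: a fast high-TDP part vs. a slow low-TDP part
# running the same job (assumed 8x speed difference).
fast = window_energy_wh(active_w=300, idle_w=20, task_s=60)
slow = window_energy_wh(active_w=65, idle_w=20, task_s=480)
print(f"fast high-TDP chip: {fast:.1f} Wh/hour")  # ~24.7 Wh
print(f"slow low-TDP chip:  {slow:.1f} Wh/hour")  # ~26.0 Wh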





  • qarium
    replied
    Originally posted by atomsymbol
    You are concentrating your attention to a single number (=10nm) too much. The primary cause of high all-cores-utilized power consumption isn't the 10nm manufacturing process.
Most people claim the FX-9590 was a bad CPU, but if you count in the race-to-idle game, it was a good CPU.



  • qarium
    replied
    Originally posted by Sin2x View Post
    piotrj3 You didnt't even bother to check that Zen 4 utilizes 5nm process, not 7, Intel fanboi. The rest of your post is more of the same pathetic cringe as the previous commenter so graciously provided for my amusement. Hell, I almost choked myself laughing seeing " Intel almost nowhere uses 125W figure" when this is literally the official TDP of the processor with 253W max Turbo that is still a whopping 82W lower that what happens in the real world. Keep fighting physics with your quixotic​ deflections and keep selecting statistically outlying benchmarks like your Far Cry 6 video example to prove your points, I can't get enough of you! Idiocracy at its finest.
There are multiple ways to get a better end result; using a better manufacturing node is one of them. But if, like Intel, you are stuck on an outdated node like Intel 10nm, there are still multiple ways to end up with the same or even better energy efficiency.

For the 11000/12000/13000 series, Intel plays this race-to-idle card.

Outside of benchmarks (race-to-idle does not help in benchmarks), most people save energy with the 12900K/13900K CPUs compared to older Intel designs and also compared to older AMD designs.

"Keep fighting physics"

Race-to-idle is a very good move in that fighting-physics game.



  • qarium
    replied
    Originally posted by piotrj3 View Post
    First any "nm" is purely marketing term. 10nm is any arbitrary number, and Intel 10nm is more efficient then 7nm TSMC.
    Second, by far most important metric in power consumption is frequency as power consumption will increase exponentially with higher frequency. This has nothing to do with 10 nm or 7nm, simply 5.8GHz will be hot regardless if it is from AMD or Intel, just Intel is only one daring to go that far.
    Third. Then you should totally bash on zen 4 because in cinebench/blender 5950X is producing more work per watt then 7950X (and it is quite notable). Why because AMD also pushed frequency up.
    Forth. Intel almost nowhere uses 125W figure, they use in some places 253W figure, but Intel is not strict in enforcing power limits, so a motherboard manufacturers commonly put PL1 unlimited and you have 335W power consumption. In reality Intel set to 253W (official guidenace) is very competitive with Ryzen 7950X set out of box (that will be constantly pushing 230-250W).
    Fifth. Efficiency in rendering test like Cinebench/Blender is garbage (Who uses CPUs for rendering in 2022 please leave the room and rethink your life). Efficiency per watt is important in your gaming session, in webrowsing, idle power consumption, if you make a lot of programming maybe in code compilation. And how does there Intel compare - in gaming 13900k is more efficient FPS/watt then 7950X ( proof https://youtu.be/H4Bm0Wr6OEQ?t=418 ).
    This is a thing, I don't care what is peak power draw (or at least i dont' care much because it might impact a little my choice of PSU, but not dramatically). I care about what is power consumption in gaming (that i do quite a lot). What is power consumption in typical Firefox webrowsing. I care what is power consumption when idle. I don't care about rendering in cinebench/blender. I care little about decompression/compression efficiency as that takes maybe few seconds of a day. Code compilation i might compile something few times but typical projects i am making will again compile in seconds so it is not significant comparing to whole day. 2 hours of gaming per day - yup that will accumulate to quite a lot of energy. 10 hours of firefox or chrome - that will be even more energy. The only workload that is quite heavy on power draw and i can imagine most people care about is software encoding of videos. But they are not perfectly scalling workloads in manner of cores so results aren't looking like in cinebench or blender.
You claim high frequency is bad for power consumption. That is true in benchmarks, but outside of benchmarks you can win the race-to-idle game with high frequencies: the job is done faster, the system reaches idle mode sooner, and overall you win in power efficiency; see the frequency sketch below.
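
For what it's worth, here is a rough model of that frequency trade-off (all constants are my own illustrative guesses, not from either post): dynamic power scales roughly with C*V^2*f, and since voltage must rise roughly linearly with frequency, active power grows about cubically with f while runtime shrinks only as 1/f. Racing to a higher clock pays off only when the static draw it avoids is large compared to the extra dynamic cost:

# Energy over a fixed window, as a function of clock frequency.
# active power = k*f**3 + static_w   (dynamic ~ C*V^2*f with V ~ f)
# idle power   = idle_w              (deep idle gates most static draw)
def window_energy_j(f_ghz, cycles=100.0, k=0.4, static_w=25.0,
                    idle_w=3.0, window_s=120.0):
    runtime_s = cycles / f_ghz          # faster clock -> shorter runtime
    active_w = k * f_ghz ** 3 + static_w
    return active_w * runtime_s + idle_w * (window_s - runtime_s)

for f in (2.0, 3.0, 4.0, 5.0, 6.0):
    print(f"{f:.1f} GHz -> {window_energy_j(f):5.0f} J")
# With these made-up constants the minimum lands near 3 GHz, i.e. whether
# the highest clock wins depends entirely on the static-vs-dynamic balance.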



  • qarium
    replied
    Originally posted by Sin2x View Post
    335W peak consumption​ for a 125W desktop processor reflects the obsolete nature of 10nm manufacturing process well enough for me. Your attempts at defending Intel in this matter look pathetic, to say the least. And no, you can't fight physics with marketing and configuration gimmicks.
You have to understand that in modern hardware design the max TDP is not a relevant metric.

Likewise, whether the manufacturing node is inferior to the Intel 10nm node is not a relevant metric either.

Intel could produce a 1000 W TDP CPU at 10nm that, outside of stupid benchmarks, always wins the race to idle and in the end wins on overall power consumption.

Of course this does not work in benchmarks, but in real-life workloads it works.

Just to make an example with Blender 3.4: the hardware with the biggest power consumption wins the race-to-idle war; the hardware with the highest TDP always wins this race-to-idle game. An overclocked Nvidia RTX 4090 at 600 W TDP is number one; second place is the Intel 13900K; the AMD 7950X is also good because of its high max TDP, and so is the 6950XT GPU.

Slower hardware with a lower TDP takes a big penalty in this race-to-sleep war, meaning that overall it consumes more power than high-TDP hardware; see the break-even sketch below.
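
One way to sanity-check that claim is to compute the break-even speed-up (my own toy derivation, with invented whole-system numbers): over a fixed window, the high-power part saves total energy only if it finishes the job more than (P_high - P_idle) / (P_low - P_idle) times faster.

# Break-even speed-up for race-to-idle over a fixed wall-clock window.
def breakeven_speedup(p_high_w, p_low_w, p_idle_w):
    return (p_high_w - p_idle_w) / (p_low_w - p_idle_w)

# Invented whole-system draws: 660 W vs. 260 W while rendering, 60 W idle.
print(breakeven_speedup(660, 260, 60))  # 3.0 -> must finish 3x faster to win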



  • qarium
    replied
    Originally posted by Sin2x View Post
    No, I should be looking at the 335W peak consumption: https://www.anandtech.com/show/17601...600k-review/18
335 W sounds bad, but only until you understand that this happens solely in stupid benchmarks.

Outside of benchmarking it is the race-to-idle game: https://en.wikichip.org/wiki/race-to-sleep

A CPU with a 1000 W TDP can be more efficient than a 65 W CPU: outside of stupid benchmarks, the 1000 W TDP CPU reaches idle mode sooner and in the end wins against the 65 W TDP CPU. A worked example follows below.
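
As a purely illustrative worked calculation (every number invented): suppose the 1000 W chip really finished a job 20x faster than the 65 W chip. The 65 W chip needs 600 s, i.e. 65 W x 600 s = 39,000 J. The 1000 W chip needs 30 s, i.e. 1000 W x 30 s = 30,000 J, and then idles for the remaining 570 s, so with an idle draw below roughly 15 W it still comes out ahead overall. Whether any real chip gets a 20x speed-up out of that power budget is, of course, the open question.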



  • qarium
    replied
    Originally posted by atomsymbol
    You should be comparing transistor density per mm^2 (or per mm^3, number of layers).
If you do the per-mm² comparison, the result is that Intel 10nm equals TSMC 7nm, but that is nonsense, because TSMC 7nm beats Intel 10nm in performance per watt. Intel calls it "Intel 7" anyway.

"per mm^3, number of layers"

This is problematic because there are many technologies that achieve higher per-mm² transistor density without being real 3D layers.

For example the gate-all-around transistor: it is not a 3D technology, but rather a clever 2D usage of the gate material ("GAAFET, gate-all-around field-effect transistor").

IBM's 2nm node has 3 layers, but its per-mm² transistor density is much higher than only 3 layers of normal transistors would give, because of gate-all-around.

It is very interesting that in the past, frequency in 2D designs was limited by the millimeter distances and the speed of light; both gate-all-around and 3D layered designs shorten those distances and thereby make higher frequencies possible.

But because the transistors themselves no longer shrink, the number of electrons moved per on/off switch of a transistor does not drop, which means power consumption does not go down. That is why the Intel 13900K consumes "295 W". Yet if you calculate the overall power consumption across typical practical workloads, it goes down because of the race-to-idle game being played: something that consumes 295 W in benchmarks can save on the energy bill in typical practical workloads, because if a developer's computer compiles faster it drops back into idle mode sooner and thereby saves energy. A sketch of that compile scenario follows below.
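Here is a toy model of that developer scenario (every number is an assumption, including the 3x compile speed-up, which has to clear the break-even ratio from the earlier sketch):

# A workday: n compiles at full power, the rest of 8 hours at idle power.
def workday_energy_wh(compile_s, active_w, idle_w=25, n=40, day_h=8):
    active_h = n * compile_s / 3600
    return active_w * active_h + idle_w * (day_h - active_h)

# Same 40 builds: a 295 W chip at 50 s per build vs. a 125 W chip at 150 s.
print(f"{workday_energy_wh(50, 295):.0f} Wh")   # ~350 Wh
print(f"{workday_energy_wh(150, 125):.0f} Wh")  # ~367 Wh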

And if you think only Intel plays this race-to-idle game: no, the AMD 7950X CPU is also very good at it.

In this race-to-idle game it would even make sense to build a CPU with a TDP of 1000 W at 10 GHz: outside of stupid benchmarks, most practical tasks would be done so fast that the computer would sit in idle mode most of the time, and race-to-idle would save an unbelievable amount of power.

Most people who claim the Nvidia RTX 4090, the Intel 13900K, the AMD 7950X, and the AMD 6950XT are bad hardware because of the high TDP do not understand the race-to-sleep/race-to-idle topic.

Outside of stupid benchmarks, in practice, if your work is done faster your system reaches idle mode sooner and in the end saves energy.



  • qarium
    replied
    Originally posted by Drago View Post
    I do not agree with you. Yes, for you, for me, for everybody on this forum the 3rd number, but for much many more people the year or release is more telling number, of how "new" PC they have. Heck, I even have colleagues software developers that didn't know Zen4 is a thing. Frankly having one universal CPU/GPU bechmark, and posting this numbers will be the utter model number. It will always increase.
    AMD C40KG10KU - CPU bench 40K points, GPU bench 10K points, Ultra low power. Which can even compact to 40K10KU.
    AMD, intel, nvidia etc, need to appoint Kronos group to make a benchmark, and all use it.
There is a much easier way than that: just give us the number of billions of transistors on the chip die and the average frequency.

That would result in "10 billion transistors clocked at 5 GHz", i.e. 10B*5GHz.

Over the last 40 years of the CPU and GPU business it was always "transistors * clock speed".

Sure, some designs are more efficient, meaning more performance with fewer transistors, but on average "transistors * clock speed" has always held true; a toy version of the naming scheme is sketched below.
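
Purely as a toy illustration of that naming idea (chip names and counts below are placeholders, not spec-sheet data):

# "transistors * clock" model number, per the scheme proposed above.
def model_number(name, transistors_b, clock_ghz):
    return (f"{name}: {transistors_b:g}B*{clock_ghz:g}GHz "
            f"(~{transistors_b * clock_ghz:g}B transistor-GHz)")

print(model_number("ExampleCPU", 10, 5))    # 10B*5GHz  (~50B transistor-GHz)
print(model_number("ExampleGPU", 76, 2.5))  # 76B*2.5GHz (~190B transistor-GHz)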

