Intel Core i5 13600K + Core i9 13900K "Raptor Lake" Linux Preview


  • #21
    Originally posted by atomsymbol

    No, reviews of Ryzen 7000 and Core 13000 do not reflect real-world usage of those CPUs. Instead, you should be looking at how the power consumption in multi-core scenarios can be configured in order to achieve an optimal power-performance ratio.
    A 335W peak consumption for a 125W desktop processor reflects the obsolete nature of the 10nm manufacturing process well enough for me. Your attempts at defending Intel in this matter look pathetic, to say the least. And no, you can't fight physics with marketing and configuration gimmicks.
    Last edited by Sin2x; 21 October 2022, 05:24 PM.



    • #22
      Originally posted by Sin2x View Post

      A 335W peak consumption for a 125W desktop processor reflects the obsolete nature of the 10nm manufacturing process well enough for me. Your attempts at defending Intel in this matter look pathetic, to say the least. And no, you can't fight physics with marketing and configuration gimmicks.
      First, any "nm" figure is purely a marketing term. "10nm" is an arbitrary number, and Intel's 10nm process is more efficient than TSMC's 7nm.

      Second, by far the most important factor in power consumption is frequency, since power rises superlinearly with clock speed. This has nothing to do with 10nm versus 7nm: 5.8GHz will run hot whether it comes from AMD or Intel; Intel is simply the only one daring to go that far.

      Third, by that logic you should bash Zen 4 too, because in Cinebench/Blender the 5950X produces notably more work per watt than the 7950X. Why? Because AMD also pushed frequency up.

      Fourth, Intel almost nowhere uses the 125W figure; in some places it uses 253W. But Intel is not strict in enforcing power limits, so motherboard manufacturers commonly set PL1 to unlimited, and then you see 335W power consumption. In reality, an Intel chip set to 253W (the official guidance) is very competitive with a Ryzen 7950X out of the box (which will constantly push 230-250W).
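
      The PL1/PL2 interplay described above can be sketched as a toy model. The 253W/335W figures match the numbers in this thread, but the moving-average control loop and the 56-second time constant are simplifying assumptions for illustration, not Intel's actual firmware behaviour:

      ```python
      # Toy PL1/PL2 model: burst at PL2 until a running average of package
      # power reaches PL1 (the sustained limit), then clamp to PL1.
      # The control loop and tau are illustrative assumptions only.

      def simulate(pl1=253, pl2=335, tau=56, seconds=120):
          """Per-second package power under a moving-average power budget."""
          avg, trace = 0.0, []
          for _ in range(seconds):
              draw = pl2 if avg < pl1 else pl1  # burst while budget remains
              avg += (draw - avg) / tau         # exponential moving average
              trace.append(draw)
          return trace

      trace = simulate()
      print(f"bursted at 335W for ~{trace.count(335)}s, then settled at {trace[-1]}W")
      ```

      With PL1 left uncapped, as many boards ship, the clamp never engages and the chip simply sits at peak draw, which is where sustained 335W readings come from.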

      Fifth, efficiency in rendering tests like Cinebench/Blender is a poor metric (who still renders on CPUs in 2022? Please leave the room and rethink your life). Efficiency per watt matters in your gaming sessions, in web browsing, at idle, and, if you program a lot, perhaps in code compilation. And how does Intel compare there? In gaming, the 13900K delivers more FPS per watt than the 7950X ( https://youtu.be/H4Bm0Wr6OEQ?t=418 ).

      That's the thing: I don't care much about peak power draw (beyond its small impact on my choice of PSU). I care about power consumption while gaming (which I do a lot), in typical Firefox browsing, and at idle. I don't care about rendering in Cinebench/Blender, and little about compression/decompression efficiency, which takes maybe a few seconds a day. I might compile something a few times a day, but my typical projects build in seconds, so it's insignificant over a whole day. Two hours of gaming per day, though, adds up to a lot of energy, and ten hours of Firefox or Chrome even more. The only genuinely power-heavy workload I can imagine most people caring about is software video encoding, and that doesn't scale perfectly across cores, so the results don't look like Cinebench or Blender.
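
      The frequency-versus-power claim above can be sketched with a toy dynamic-power model (P ≈ C·V²·f, with voltage assumed to rise roughly linearly with frequency past the base clock; all constants are illustrative, not measured Raptor Lake data):

      ```python
      # Toy model of dynamic CPU power: P = C * V^2 * f.
      # All constants are illustrative assumptions, not measured data.

      def dynamic_power(freq_ghz, base_freq=4.0, base_volt=1.0, cap=1.0):
          """Dynamic power in arbitrary units, assuming voltage must scale
          roughly linearly with frequency once past the base clock (DVFS)."""
          volt = base_volt * max(1.0, freq_ghz / base_freq)
          return cap * volt ** 2 * freq_ghz

      ratio = dynamic_power(5.8) / dynamic_power(4.0)
      print(f"5.8 GHz draws {ratio:.1f}x the power of 4.0 GHz")
      ```

      Under these assumptions a 45% clock increase roughly triples dynamic power, which is why the last few hundred MHz are so expensive.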
      Last edited by piotrj3; 21 October 2022, 06:00 PM.



      • #23
        piotrj3 You didn't even bother to check that Zen 4 uses a 5nm process, not 7nm, Intel fanboi. The rest of your post is more of the same pathetic cringe the previous commenter so graciously provided for my amusement. Hell, I almost choked laughing at "Intel almost nowhere uses the 125W figure" when that is literally the official TDP of the processor, with a 253W max Turbo that is still a whopping 82W lower than what happens in the real world. Keep fighting physics with your quixotic deflections and keep selecting statistically outlying benchmarks like your Far Cry 6 video to prove your points; I can't get enough of you! Idiocracy at its finest.
        Last edited by Sin2x; 21 October 2022, 06:53 PM.



        • #24
          Originally posted by Drago View Post
          I do not agree with you. Yes, for you, for me, for everybody on this forum it's the third number, but for many more people the year of release is a more telling indicator of how "new" their PC is. Heck, I even have software-developer colleagues who didn't know Zen 4 is a thing. Frankly, one universal CPU/GPU benchmark, with its numbers posted, would effectively become the model number. It will always increase.
          AMD C40KG10KU - CPU bench 40K points, GPU bench 10K points, Ultra low power. Which could even be compacted to 40K10KU.
          AMD, Intel, Nvidia etc. need to appoint the Khronos Group to make a benchmark, and all use it.
          There is a much easier way than that: just give us the number of billions of transistors on the chip die and the average frequency.

          That would give something like 10 billion transistors clocked at 5GHz... 10B * 5GHz...

          Over the last 40 years in the CPU and GPU business it was always "transistors * clock speed".

          Sure, some designs are more efficient, meaning more performance from fewer transistors, but on average "transistors * clock speed" has always held.
          Phantom circuit Sequence Reducer Dyslexia



          • #25
            Originally posted by atomsymbol
            You should be comparing transistor density per mm^2 (or per mm^3, number of layers).
            If you do the mm² comparison, the result is that Intel 10nm equals TSMC 7nm, but that's nonsense, because TSMC 7nm beats Intel 10nm in performance per watt... Intel calls it "Intel 7" now anyway.

            "per mm^3, number of layers"

            this is problematic because there are many technology to get higher mm² transistor density what is not real 3D layers

            for example tranistor all around gate... it is not 3D technology instead clever 2D useage of the gate material....
            "AAFET (gate-all-around field-effect transistor)​"
            https://en.wikipedia.org/wiki/Multigate_device

            IBM's 2nm node has 3 layers, but its mm² transistor density is much higher than 3 layers of conventional transistors would give, because of gate-all-around.

            Very interestingly, in older 2D designs frequency was limited by the mm distances involved and the speed of light; both gate-all-around and 3D layered designs shorten those distances and thereby make higher frequencies possible.

            But because the transistors themselves no longer shrink, the amount of charge moved per on/off switch stays the same, and power consumption per switch does not go down. That's why the Intel 13900K draws "295 watts" in benchmarks. Yet if you calculate overall power consumption across typical real-world workloads, it goes down, because of the "race to idle" game: something that draws 295 watts in benchmarks can still cut a developer's energy bill in practice, because faster compiles send the machine back to idle sooner.

            And if you think only Intel plays this race-to-idle game: no, AMD's 7950X is also very good at it.

            In this race-to-idle game it would even make sense to build a 1000W-TDP CPU at 10GHz: outside of silly benchmarks, most real tasks would finish so fast that the machine would spend most of its time in idle, and race to idle would save an unbelievable amount of power.

            https://en.wikichip.org/wiki/race-to-sleep

            Most people who claim the Nvidia RTX 4090, the Intel 13900K, the AMD 7950X or the AMD 6950XT are bad hardware because of their high TDPs do not understand the race-to-sleep/race-to-idle topic.

            Outside of silly benchmarks, if your work finishes faster, your system is back in idle sooner and in the end saves energy.
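
            The race-to-idle argument can be made concrete with a small energy model; the wattages and task times below are hypothetical illustrations, not measurements of the CPUs discussed:

            ```python
            # Race-to-idle sketch: total energy over a fixed one-hour window is
            # active power for the task plus idle power for the remainder.
            # All wattages and durations are hypothetical.

            def energy_wh(active_w, idle_w, task_s, window_s=3600):
                """Watt-hours consumed over the window for one task run."""
                return (active_w * task_s + idle_w * (window_s - task_s)) / 3600

            fast = energy_wh(active_w=250, idle_w=10, task_s=120)  # done in 2 minutes
            slow = energy_wh(active_w=65, idle_w=10, task_s=600)   # done in 10 minutes
            print(f"fast chip: {fast:.1f} Wh, slow chip: {slow:.1f} Wh")
            ```

            Under these numbers the 250W chip uses less total energy than the 65W one. The effect only holds when idle power is low and the speedup is large enough, so it is an argument about duty cycle, not a free pass for high TDP.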




            • #26
              Originally posted by Sin2x View Post
              No, I should be looking at the 335W peak consumption: https://www.anandtech.com/show/17601...600k-review/18
              335W sounds bad, but only until you understand that it happens only in silly benchmarks...

              Outside of benchmarking it is the race-to-idle game: https://en.wikichip.org/wiki/race-to-sleep

              A CPU with a 1000W TDP can be more efficient than a 65W CPU...

              Outside of silly benchmarks, the 1000W-TDP CPU reaches idle sooner and in the end beats the 65W-TDP CPU.




              • #27
                Originally posted by Sin2x View Post
                A 335W peak consumption for a 125W desktop processor reflects the obsolete nature of the 10nm manufacturing process well enough for me. […]
                You have to understand that in modern hardware design the maximum TDP is not a relevant metric.

                Likewise, having a manufacturing node inferior to Intel's 10nm is not a relevant metric either...

                Intel could produce a 1000W-TDP CPU on 10nm that, outside of silly benchmarks, would always win the race to idle and in the end win on overall power consumption.

                https://en.wikichip.org/wiki/race-to-sleep

                Of course this does not work in benchmarks, but with real-life workloads it does...

                To give an example with Blender 3.4: the hardware with the highest TDP tends to win this race-to-idle war. An overclocked Nvidia RTX 4090 at 600W TDP is number one; second place is the Intel 13900K; the AMD 7950X is also good because of its high maximum TDP, as is the 6950XT GPU.

                Slower hardware with a lower TDP takes a big penalty in the race-to-sleep war: overall it ends up consuming more power.



                • #28
                  Originally posted by piotrj3 View Post
                  Second, by far the most important factor in power consumption is frequency, since power rises superlinearly with clock speed. […]
                  You claim high frequency is bad for power consumption. That is true in benchmarks, but outside of them high frequency can win the race-to-idle game: the job is done faster, the system reaches idle sooner, and overall you win on power efficiency.

                  https://en.wikichip.org/wiki/race-to-sleep



                  • #29
                    Originally posted by Sin2x View Post
                    piotrj3 You didn't even bother to check that Zen 4 uses a 5nm process, not 7nm, Intel fanboi. […]
                    There are multiple ways to get a better end result; a better manufacturing node is one of them...
                    If, like Intel, you are stuck on an outdated node such as Intel 10nm, there are still multiple ways to end up with the same or even better energy efficiency:
                    https://en.wikichip.org/wiki/race-to-sleep

                    Intel plays this race-to-idle card with the 11000/12000/13000 series.

                    Outside of benchmarks (race to idle does not help in benchmarks), most people save energy with the 12900K/13900K compared to older Intel CPU designs and also compared to older AMD CPU designs...

                    "Keep fighting physics"?

                    The race-to-idle war is a very good move in the keep-fighting-physics game.



                    • #30
                      Originally posted by atomsymbol
                      You are concentrating too much on a single number (=10nm). The primary cause of high all-cores-utilized power consumption isn't the 10nm manufacturing process.
                      Most people claim the FX-9590 was a bad CPU... but if you count in the race-to-idle game, it was a good one.
