Intel Core i9 12900K On Linux Reigns "King Of The IOPS-Per-Core"

  • Intel Core i9 12900K On Linux Reigns "King Of The IOPS-Per-Core"

    Phoronix: Intel Core i9 12900K On Linux Reigns "King Of The IOPS-Per-Core"

    It's been a while since we last heard anything of Linux block subsystem maintainer Jens Axboe's crusade to achieve the maximum possible IOPS-per-core. However, on Friday he was out with his latest insight, still declaring Intel's Core i9 12900K "Alder Lake" processor the king of IOPS-per-core performance at nearly 13M IOPS per CPU core...
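
    For context, these record numbers come from Axboe's t/io_uring benchmark tool in the fio repository, hammering the io_uring interface as hard as possible. Below is a minimal sketch of the basic io_uring submit/complete pattern with liburing; the device path, queue depth, and block size are illustrative placeholders, and none of Axboe's actual tuning (polled I/O, registered files and buffers) is shown.

    #define _GNU_SOURCE          /* for O_DIRECT */
    #include <fcntl.h>
    #include <liburing.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCK_SIZE  4096     /* illustrative; Axboe's peak runs use small 512-byte reads */
    #define QUEUE_DEPTH 32       /* illustrative */

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <block-device-or-file>\n", argv[0]);
            return 1;
        }

        /* O_DIRECT bypasses the page cache, as raw IOPS benchmarks do. */
        int fd = open(argv[1], O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        struct io_uring ring;
        if (io_uring_queue_init(QUEUE_DEPTH, &ring, 0) < 0) {
            fprintf(stderr, "io_uring_queue_init failed\n");
            return 1;
        }

        /* O_DIRECT requires block-aligned buffers. */
        void *buf;
        if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE)) return 1;

        /* One read: grab an SQE, prep it, submit, reap the CQE. A real IOPS
           run keeps QUEUE_DEPTH requests in flight and loops on completions. */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, BLOCK_SIZE, 0);
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("cqe res (bytes read, or -errno): %d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }

    Build with "gcc -O2 demo.c -luring" (requires liburing installed).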


  • #2
    Yes, but what's the power cost of those IOPS?

    • #3
      Originally posted by BlueCrayon View Post
      Yes, but what's the power cost of those IOPS?
      This is about testing the code to its limits to remove bottlenecks. Whoever really needs that many IOPS is surely willing to invest in it for the returns.

      Plus, I'm really not surprised by the result. First, all pre-Ryzen compilers would (rightfully so) include general optimizations developed mostly on Intel x86 PCs, so I bet not all of those translate perfectly to an AMD CPU (I bet the GCC developers have by now all transitioned, like Linus, to Threadripper or at least Zen 3, so the trend should slowly start to reverse). Second, Intel still holds the single-thread crown; by not much, but it still holds it. The fact that it only goes from 13 to 13.07 sounds to me like there is still performance left on the table in the IOPS code; the increase is too modest for the single-core jump from Threadripper/EPYC to Alder Lake.
      Also, consider that I bet Optane has some kind of hidden optimization built into the motherboard chipset that we don't know about when paired with an Intel system.
      Last edited by sireangelus; 19 February 2022, 07:40 AM.

      • #4
        I don't care about the last 10 or 20% of performance.
        The main issue with new machines is SPYWARE. If you are doing something on a new machine and you lack performance, you are doing it seriously wrong.
        The latest updates in the main SW ecosystem (Win11 etc.) and the new upcoming Ryzens are scary.
        Sadly, they are the brighter part of the spectrum.
        Intel is FAR worse.
        Just for fun, I watched an interview with Pat Gelsinger.
        Even in the first 15 seconds I could tell the guy is a psychopath. The rest of it revealed he's working with a eugenics agenda.

        I think my next machine will be something with RISC-V, preferably from Chinese production lines...

        • #5
          Originally posted by BlueCrayon View Post
          Yes, but what's the power cost of those IOPS?
          According to Linus' early reviews, AMD still beats Intel when it comes to power efficiency at lower wattages: https://www.youtube.com/watch?v=wNSFKfUTGR8. The P/E cores seem to have helped Intel, but the other side of the coin is that Intel is still getting beaten by the simpler homogeneous architecture AMD has via TSMC, which just shows how far behind Intel's node is from AMD/TSMC's.

          • #6
            Originally posted by Brane215 View Post
            I don't care about the last 10 or 20% of performance.
            The main issue with new machines is SPYWARE. If you are doing something on a new machine and you lack performance, you are doing it seriously wrong.
            The latest updates in the main SW ecosystem (Win11 etc.) and the new upcoming Ryzens are scary.
            Sadly, they are the brighter part of the spectrum.
            Intel is FAR worse.
            Just for fun, I watched an interview with Pat Gelsinger.
            Even in the first 15 seconds I could tell the guy is a psychopath. The rest of it revealed he's working with a eugenics agenda.

            I think my next machine will be something with RISC-V, preferably from Chinese production lines...
            I have no idea what you just said. Besides, that many IOPS is usually needed for very large databases or massive virtualization/containerization.

            Using 100% of a single core to drive 13 million IOPS means that when you only need the original 3 MIOPS he started with, you need roughly a quarter of a core. That is a massive ~75% reduction in CPU overhead, which frees up CPU power to do other things (quick math below).

            On the horizon we have PCIe 5.0 rated at roughly 14 GB/s for an x4 link. Current mid-range NVMe drives are in the range of 30 to 50 kIOPS, but I bet that's going to go up in the future.
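
            To put rough numbers on both points above (a sketch; the 4 KiB block size is my assumption for illustration, not something from the article):

            #include <stdio.h>

            int main(void)
            {
                /* Core share: the "original 3 MIOPS" workload vs the 13 MIOPS peak. */
                double peak_miops = 13.0, needed_miops = 3.0;
                printf("core share for 3 MIOPS: %.0f%%\n",
                       100.0 * needed_miops / peak_miops);        /* ~23%, roughly a quarter core */

                /* IOPS ceiling of a ~14 GB/s PCIe 5.0 x4 link, assuming 4 KiB reads. */
                double link_bytes_per_sec = 14e9, block_bytes = 4096.0;
                printf("bandwidth-limited IOPS: %.1fM\n",
                       link_bytes_per_sec / block_bytes / 1e6);   /* ~3.4M IOPS */
                return 0;
            }

            In other words, at 4 KiB blocks a single tuned core could already keep several PCIe 5.0 x4 drives saturated.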

            This work is mainly oriented toward the server space.

            On the spyware front, do you mean all those Microsoft services that run in the background collecting data?
            The difference on my G14 between Linux and Windows with ultra-optimized configs (including custom fan profiles and Ryzen Controller upping the TDPs) on Geekbench is about 5% on single-core and even less on multi-thread. Sure, I still need to change the thermal paste, but I don't think adding 5 W of TDP is going to make such a great difference.

            The main difference between Linux and Windows on most Phoronix tests, I believe, is the compiler and the filesystem.
            We need to be honest and admit that Windows does an amazing job of managing background activity so that it does not impede the main program you're running, even though you end up with hundreds of additional background processes compared to Linux.


            Anyway, apart from wanting to go down the street and march against Pluton (a silly example: Valorant refuses to run on W11 if you don't have Secure Boot on, yet runs fine on W10), because it's Palladium all over again and we should rise up like we did back then, I think you are naive if you think buying Chinese parts will be the solution (PS: look up the alleged hidden Chinese spy chips in Supermicro motherboards).
            Last edited by sireangelus; 19 February 2022, 09:01 AM.

            • #7
              Originally posted by mdedetrich View Post

              According to Linus' early reviews, AMD still beats Intel when it comes to power efficiency at lower wattages: https://www.youtube.com/watch?v=wNSFKfUTGR8. The P/E cores seem to have helped Intel, but the other side of the coin is that Intel is still getting beaten by the simpler homogeneous architecture AMD has via TSMC, which just shows how far behind Intel's node is from AMD/TSMC's.
              Linus compared a CPU built around a higher nominal TDP (80 W) with one built around 45 W. Of course, in that case it is expected that at the same TDP the CPU optimized for lower power usage wins. It is like taking a normal desktop CPU lowered to some abnormally low TDP and comparing it against a mobile CPU designed from the start for that TDP.

              In Der8auer's tests (https://youtu.be/njjH1TN3Fag), the results for desktop CPUs are the total opposite.

              • #8
                Originally posted by Brane215 View Post
                If you are doing something on a new machine and you lack performance, you are doing it seriously wrong.
                Not everyone is a filthy casual mobile monkey who just uses their PC to browse the web or play games.

                • #9
                  Oh, I know what Brane215 is getting at. Most of us have more computing power at our fingertips than we know what to do with. I set my Linux AMD Ryzen machines not to boost because they are plenty fast as-is. I've never maxed out the cores for more than a few seconds... And I do a bit more than just browse (I don't waste time with games either), as I am a computer programmer by trade... Back in the Z-80, 68000, and early x86 days I was always looking for more power. Not so with the computers of the last 5 years or so.
                  Last edited by rclark; 19 February 2022, 06:07 PM.

                  • #10
                    Originally posted by piotrj3 View Post

                    Linus compared a CPU built around a higher nominal TDP (80 W) with one built around 45 W. Of course, in that case it is expected that at the same TDP the CPU optimized for lower power usage wins. It is like taking a normal desktop CPU lowered to some abnormally low TDP and comparing it against a mobile CPU designed from the start for that TDP.

                    In Der8auer's tests (https://youtu.be/njjH1TN3Fag), the results for desktop CPUs are the total opposite.
                    Der8auer is testing CPUs of different generations. The 5950X is a previous-gen AMD CPU, whereas the i9-12900K is Intel's current generation that just came out. AMD's next-gen desktop CPUs (i.e. the 6000 series for desktop) are yet to come out, unlike the mobile 6000 series, which was released earlier.

                    The 5950X came out in November 2020, which makes it over a year old.

                    This is why the laptop review was more relevant; it's comparing the current gen of both CPUs, which is more of an apples-to-apples comparison.
                    Last edited by mdedetrich; 19 February 2022, 06:11 PM.
