
7.4M IOPS Achieved Per-Core With Newest Linux Patches


  • 7.4M IOPS Achieved Per-Core With Newest Linux Patches

    Phoronix: 7.4M IOPS Achieved Per-Core With Newest Linux Patches

    Linux block subsystem maintainer and lead IO_uring developer Jens Axboe had a goal of hitting 7M IOPS per-core performance this week. On Monday he managed to already hit 7.2M IOPS and today hit 7.4M IOPS with his latest work-in-progress kernel patches...

    https://www.phoronix.com/scan.php?pa...Linux-Per-Core

  • #2
    What vulnerabilities will be exposed to get such performance?



    • #3
      Originally posted by edwaleni
      What vulnerabilities will be exposed to get such performance?
      None?



      • #4
        Originally posted by tomas
        None?
        I am thinking of the recent performance-limiting mitigations around caches and side-channel behaviors, and the recent need to flush and reload them.



        • #5
          This with the newest AMD (if you can get one) and a super-expensive Optane Gen2? The Intel drive probably costs almost as much as a car?



          • #6
            Originally posted by onlyLinuxLuvUBack
            This with the newest AMD (if you can get one) and a super-expensive Optane Gen2? The Intel drive probably costs almost as much as a car?
            I think it is with two Gen2s, as it is more performance than one drive can handle.



            • #7
              Originally posted by onlyLinuxLuvUBack
              This with the newest AMD (if you can get one) and a super-expensive Optane Gen2? The Intel drive probably costs almost as much as a car?
              That's irrelevant. What this guy is doing is making sure the code runs as fast as possible; he's just verifying that he has gotten rid of unnecessary overhead.



              • #8
                Multiple OSes and chipsets have had security issues because of SMT (Intel's flavor is called Hyper-Threading), because one thread can watch the various latencies and behaviors of the CPU and infer what the other thread is doing. Because you can fingerprint those behaviors, you can infer, via cache misses, pipeline stalls, and the like, what is going on when, say, OpenSSL or OpenSSH is handling a private key.
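                Not an actual SMT attack, but the core inference principle behind those side channels can be sketched in a few lines: if a victim's running time depends on a secret, an observer who can only measure time can still recover the secret. This is a toy illustration with made-up names; a real attack measures cache or execution-port contention from a sibling hyperthread, not the wall-clock time of the victim itself.

```python
import time
import statistics

def victim(secret_bit):
    # Secret-dependent work: one branch is much slower. This is the
    # kind of data-dependent timing that side-channel attacks exploit.
    if secret_bit:
        s = 0
        for i in range(50000):
            s += i * i
        return s
    return 0

def median_time(secret_bit, trials=30):
    # Median over several runs smooths out scheduler noise.
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        victim(secret_bit)
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# The "attacker" never sees the bit, only timings, yet can guess it.
guess = 1 if median_time(1) > 2 * median_time(0) else 0
print(guess)
```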

                However, the recent I/O performance improvements have mostly been about efficiently implementing and optimizing io_uring, which if anything tends to be more secure than other approaches. In particular, io_uring seems like an elegant design for moving data between user space and kernel space efficiently, without the complications and potential race conditions of other solutions.

                io_uring has been widely discussed and reviewed, and it seems like a really nice solution, so much so that people are migrating other syscalls to io_uring even when their normal use cases are not, or are at least less, performance-limited. I've not seen any hint that io_uring should reduce security.
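                As a rough mental model of that two-ring design, here is a toy user-space sketch, not the real io_uring API (the actual interface is the io_uring_setup/io_uring_enter syscalls, usually via liburing): the application appends submission queue entries, the kernel drains them and posts completions, and because both rings sit in shared memory there is no per-request copying of the control structures.

```python
from collections import deque

class ToyRing:
    """Toy model of io_uring's submission/completion rings (illustrative only)."""

    def __init__(self):
        self.sq = deque()  # submission queue: user -> "kernel"
        self.cq = deque()  # completion queue: "kernel" -> user

    def submit(self, op, user_data):
        # User space queues a request; no syscall needed per entry.
        self.sq.append((op, user_data))

    def kernel_step(self):
        # Stand-in for the kernel draining the SQ and posting completions.
        while self.sq:
            op, user_data = self.sq.popleft()
            self.cq.append((op(), user_data))

    def wait_cqe(self):
        # User space reaps a completion, tagged with its user_data.
        return self.cq.popleft()

ring = ToyRing()
ring.submit(lambda: b"512 bytes of data", user_data=42)
ring.kernel_step()
result, tag = ring.wait_cqe()
print(result, tag)
```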

                Does anyone know exactly what fio settings were used to generate the 7.4M IOPS number?
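                (The article doesn't say. Axboe's peak-IOPS numbers usually come from the t/io_uring tool shipped in the fio repository rather than from fio itself. Purely as a hedged guess, a fio job aimed at the same io_uring fast paths, polled I/O with registered files and buffers against a raw NVMe device, might look like the following; the device path and every value here are placeholders, not the actual settings used.)

```ini
; hypothetical fio job, NOT the published configuration
[peak-iops]
ioengine=io_uring
direct=1
rw=randread
bs=512
iodepth=128
numjobs=1
fixedbufs=1
registerfiles=1
hipri=1
filename=/dev/nvme0n1
```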



                • #9
                  This is fantastic, my favorite kind of story. The thing about these kinds of stories, though, is that nobody seems to have the right mindset. When somebody re-engineers code to do something way faster and more efficiently than before, that means the previous implementation was doing it wrong. The new thing can be great and brilliant and cause for celebration and all, but it still also means the previous thing was doing it wrong, and I think we do ourselves as programmers a disservice by never acknowledging that. If you one day discover a direct route to the grocery store, where before your route consisted of first driving 500 miles in the opposite direction and then driving in circles for a week, it's not so much that you engineered a brilliant new path; it's that the previous thing was doing it wrong. If we were honest and objective, a lot of the time the way software is engineered and implemented, there's a lot of going in circles and driving 500 miles in the opposite direction.



                  • #10
                    Originally posted by quaz0r
                    This is fantastic, my favorite kind of story. The thing about these kinds of stories, though, is that nobody seems to have the right mindset. When somebody re-engineers code to do something way faster and more efficiently than before, that means the previous implementation was doing it wrong. The new thing can be great and brilliant and cause for celebration and all, but it still also means the previous thing was doing it wrong, and I think we do ourselves as programmers a disservice by never acknowledging that. If you one day discover a direct route to the grocery store, where before your route consisted of first driving 500 miles in the opposite direction and then driving in circles for a week, it's not so much that you engineered a brilliant new path; it's that the previous thing was doing it wrong. If we were honest and objective, a lot of the time the way software is engineered and implemented, there's a lot of going in circles and driving 500 miles in the opposite direction.
                    You're oversimplifying. Code that was fine on a single-core CPU may not be optimal for the latest Threadripper. Code that wrote data nicely on an HDD may not be that nice on an SSD. Tools and languages themselves evolve. Code that you may deem "correct" today may have been unnecessary or too costly to implement yesterday.

